DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 7-8, 10, 12-15, and 17 are pending.
Claims 7, 10, 12, 13, 15, and 17 have been amended.
Claims 1-6, 9, 11, 16, and 18 have been canceled.
Response to Amendment
Objection to the Drawings: Applicant’s amendment to the specification and the replacement sheet overcome the objections of record. The objections to the drawings are withdrawn.
Objections to the Specification: Applicant’s amendment to the specification overcomes the objections of record. The objections to the specification are withdrawn.
Objections to the Claims: Applicant has canceled claim 11. The objection to claim 11 is withdrawn.
Rejections Under 35 U.S.C. §112(b): Applicant’s amended and canceled claims overcome the rejections of record. The 112(b) rejections are withdrawn.
Response to Arguments
Rejections Under 35 U.S.C. §101: Applicant's arguments filed 09/12/2025 have been fully considered but they are not persuasive.
Applicant argues “Claims 1-3 and 5-18 are rejected under 35 U.S.C. § 101 as allegedly being directed to an abstract idea without significantly more. In the interest of expediting prosecution, independent claim 1 is hereby canceled without prejudice, disclaimer, or waiver, and independent claims 7 and 12, and others, are amended. The rejection is respectfully traversed.
To assess eligibility under 35 U.S.C. § 101, the Supreme Court applies a two-step framework. First, it must be determined whether the claims are "directed to" a judicial exception such as an abstract idea, law of nature, or natural phenomenon. If so, it must then be determined whether the claim elements, considered individually and as an ordered combination, amount to "significantly more" than the exception. Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 217-18 (2014); Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 77-80 (2012).
As clarified in Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), claims that are directed to improvements in the functioning of a computer or system itself are not abstract. Similarly, in McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299 (Fed. Cir. 2016), the Federal Circuit emphasized that claims which recite a specific set of rules or processes that improve existing technological methods are patent-eligible. Claims that are directed to a system using technical elements to achieve a specific, practical outcome can be found patent-eligible, especially when they present a specific technical solution rooted in a new combination or application of those elements to solve a technical problem. Thales Visionix Inc. v. United States, 850 F.3d 1343 (Fed. Cir. 2017).
Independent claim 7 is hereby amended for clarification and now recites, inter alia, the following features:
a recognition unit configured to receive object information collected by a sensor of the autonomous vehicle to recognize an environment; a V2X reception message processing unit configured to receive object information collected by an InfraEdge system; and
a determination and control unit configured to use the object information collected by the sensor and the object information collected by the InfraEdge system, and perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system,
wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time,
wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object.
Independent claim 12, which has its own claim scope, is also hereby amended to recite similar aspects.
It is respectfully submitted that the amended claims are not directed to a judicial exception, that they successfully integrate any alleged abstract idea into a practical application, and that they amount to significantly more than such an exception.
The claims are not directed to a judicial exception. This case is similar to the facts in Thales, where claims directed to a system for tracking motion using inertial sensors were found patent-eligible because they recited a specific technical solution rooted in sensor fusion and real-time processing. Like Thales, the present claims recite a specific architecture and processing logic for integrating object recognition data from disparate sources (vehicle sensors and InfraEdge systems), correcting for convergence delay, and making autonomous driving determinations based on reflection area logic. These are not abstract mental processes, but concrete improvements to autonomous vehicle control systems.
Even if the claims were viewed as reciting an abstract idea, they integrate the alleged exception into a practical application. Specifically, the claims recite a system and method that dynamically arbitrates between sensor-detected and InfraEdge-detected objects based on corrected location and reflection area inclusion. This arbitration logic is not generic data processing; it is tied to real-time autonomous driving decisions and improves the reliability of object recognition under latency conditions.
Finally, the claims recite significantly more than any alleged abstract idea. The claimed subject matter includes specific structural components (e.g., recognition unit, V2X reception message processing unit, determination and control unit) and recites detailed operations, such as correcting object location based on convergence delay, evaluating reflection area inclusion, and discarding or using objects based on recognition source. These steps are not routine or conventional; they reflect a tailored solution to the technical problem of latency-aware object arbitration in autonomous driving.
In view of the above, it is respectfully submitted that independent claims 7 and 12 recite patent-eligible subject matter, and that the dependent claims are eligible at least based on their dependence from an eligible base claim, as well as for their own recitations. Favorable reconsideration and withdrawal of the rejection under 35 U.S.C. § 101 are respectfully requested. ”
Examiner respectfully disagrees that “the amended claims are not directed to a judicial exception, that they successfully integrate any alleged abstract idea into a practical application, and that they amount to significantly more than such an exception. The claims are not directed to a judicial exception. This case is similar to the facts in Thales, where claims directed to a system for tracking motion using inertial sensors were found patent-eligible because they recited a specific technical solution rooted in sensor fusion and real-time processing. Like Thales, the present claims recite a specific architecture and processing logic for integrating object recognition data from disparate sources (vehicle sensors and InfraEdge systems), correcting for convergence delay, and making autonomous driving determinations based on reflection area logic. These are not abstract mental processes, but concrete improvements to autonomous vehicle control systems” because the claims as drafted recite elements such as a recognition unit, a V2X reception message processing unit, and a determination and control unit, which are recited at a high level of generality and can be implemented as a computer system, consistent with Page 19 lines 13-19 of Applicant’s specification as filed, which is an example of a general purpose computer that merely applies the judicial exceptions. These abstract ideas, which can be performed in the human mind and/or with the aid of pen and paper, include at least “perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system”, “corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time”, and “considers a corrected location of the object included in the object information collected by the 
InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object” with respect to claim 7, and at least “establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time” with respect to claim 12. Further, in order to show an improvement to computer functionality, to other technology, or to a technical field, “the claim must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component of machinery to qualify as an improvement to an existing technology. See MPEP §2106.05(f) for more information about mere instructions to apply an exception” (also see at least MPEP §2106.05(a)). Specifically, the claims as currently drafted merely recite instructions to perform a method of infrastructure dynamic object recognition information convergence processing in an autonomous vehicle on generic component(s) or machinery. 
Therefore, due to the recitation of the system (computer system) at a high level of generality and the mere recitation of instructions to perform the method on these generic components, for example in the limitations detailed above, the claims would not qualify as an improvement to computer functionality, an existing technology, and/or a specific technological improvement.
Examiner also respectfully disagrees that “claimed subject matter includes specific structural components (e.g., recognition unit, V2X reception message processing unit, determination and control unit) and recites detailed operations, such as correcting object location based on convergence delay, evaluating reflection area inclusion, and discarding or using objects based on recognition source. These steps are not routine or conventional; they reflect a tailored solution to the technical problem of latency-aware object arbitration in autonomous driving” because the specific structural components (e.g., recognition unit, V2X reception message processing unit, determination and control unit) are recited at a high level of generality, amount to generic computer components, and can be implemented as a generic computer system. Moreover, Frye (US2023/0091772A1) teaches deactivating vehicle sensor systems when information transferred by way of V2X messages is reliable and further determining reliability (plausibility) based at least on latency times (see at least [0053] and [0067]), and Lim (US2020/0026290A1) teaches correcting position information included in a V2X message based on the value of time (see at least [0479]-[0480]; also see at least [0430]); therefore, the steps are routine and conventional. As such, the claims are ineligible, and the 101 rejections are maintained.
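For illustration only (not part of the record, the claims, or the cited references), the kind of time-based location correction at issue, in which a reported object position is shifted by the convergence delay time between the recorded reception time and the current time, can be sketched in a few lines. The velocity-based dead reckoning and all field names below are assumptions, since neither the claims nor the cited passages specify a particular correction formula:

```python
from dataclasses import dataclass

@dataclass
class ReportedObject:
    x: float              # reported position (hypothetical fields)
    y: float
    vx: float             # reported velocity; assumed available, since the
    vy: float             # claims do not specify how the correction is made
    recorded_time: float  # time recorded at V2X message reception

def correct_location(obj: ReportedObject, current_time: float) -> tuple:
    """Shift the reported position by the convergence delay time,
    i.e. the difference between the current time and the recorded time."""
    delay = current_time - obj.recorded_time
    return (obj.x + obj.vx * delay, obj.y + obj.vy * delay)
```

Under this sketch, an object reported 0.5 s ago and moving at 10 m/s would be shifted 5 m along its reported heading before any reflection-area determination is made.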
Rejections Under 35 U.S.C. §103: Applicant's arguments filed 09/12/2025 have been fully considered but they are not persuasive.
Applicant argues “Claims 1, 4, 7, and 12 are rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang et al. (US 2023/0211776 A1) in view of Frye (US 2023/0091772 A1). Claim 2 is rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang and Frye in further view of Urano et al. (US 2022/0397895 A1). Claims 3, 8, 13, 16, and 18 are rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang and Frye in further view of Hwang et al. (US 2020/0021960 A1). Claims 5, 9, 11, and 14 are rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang and Frye in further view of Lim et al. (US 2020/0026290 A1). Claims 6, 10, and 15 are rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang, Frye, and Lim in further view of Lee et al. (US 2022/0332327 A1). Claim 17 is rejected under 35 U.S.C. § 103 as allegedly being obvious over Yang, Frye, Hwang, and Lim. These rejections are respectfully traversed.
In order for an obviousness rejection to be proper, the Examiner must meet the burden of establishing that all elements of the invention are disclosed in the prior art; that the prior art relied upon, coupled with knowledge generally available in the art at the time of the invention, must contain some suggestion or incentive that would have motivated the skilled artisan to modify a reference or combined references; and that the proposed modification of the prior art must have had a reasonable expectation of success, determined from the vantage point of the skilled artisan at the time the invention was made. In re Fine, 5 U.S.P.Q.2d 1596, 1598 (Fed. Cir. 1988); In re Wilson, 165 U.S.P.Q. 494, 496 (C.C.P.A. 1970); Amgen v. Chugai Pharmaceuticals Co., 18 U.S.P.Q.2d, 1016, 1023 (Fed. Cir. 1991). See also MPEP § 2143.
In the interest of expediting prosecution, claims 1-6 are hereby canceled without prejudice, disclaimer, or waiver. Independent claims 7 and 12 have been amended for clarification and to incorporate the subject matter of former dependent claims 9, 11, 16, and 18. Accordingly, the rejection under 35 U.S.C. § 103 based on Yang and Frye is respectfully traversed and rendered moot with respect to the amended claims. Specifically, amended independent claim 7 now recites, inter alia, the following features:
a determination and control unit configured to use the object information collected by the sensor and the object information collected by the InfraEdge system, and perform a determination for autonomous driving in consideration of a reflection area of the
InfraEdge system,
wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a
current time,
wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle,
discards the recognized object.
Independent claim 12, which has its own patent claim scope, is also hereby amended to recite similar aspects.
It is respectfully submitted that the various combinations of references fail to disclose the above features. Accordingly, it is respectfully submitted that the various combinations of references cannot properly be relied upon for establishing a prima facie case of obviousness.
According to the applied references, due to the occurrence of fusion delay time caused by the recognition time difference between infrastructure and autonomous vehicles, as well as communication delay time, it is difficult to determine whether an object detected by the infrastructure and an object detected by the autonomous vehicle correspond to the same object or to different objects. Ambiguity must be resolved in favor of Applicant, i.e., in favor of the patentability of the claimed subject matter. See, e.g., In re Hofstetter, 362 F.2d 293, 298 (C.C.P.A. 1966) (standing for the proposition that since the United States Patent and Trademark Office bears the ultimate burden of proving unpatentability, it is proper to resolve doubt in favor of patentability).
The presently claimed embodiments solve the aforementioned problem. Specifically, by comprehensively considering infrastructure recognition information, recognition time, and communication delay time, the claimed embodiments correct the position of the object recognized by the infrastructure and determine whether it is identical to the object recognized by the autonomous vehicle, thereby supporting the safe autonomous driving of the autonomous vehicle.
The claimed embodiments can be characterized in that the determination is based on recognition accuracy, namely, by comparing the recognition accuracy of the infrastructure-edge system and that of the autonomous vehicle system to ultimately select the object. More specifically, by considering the corrected positional information of the object and the reflection region of the infrastructure-edge system, if the object falls within the reflection region, the object recognized by the infrastructure-edge system is utilized for autonomous driving judgment, whereas if the object is recognized by the sensors of the autonomous vehicle, such object is excluded from the judgment process.
The above is reflected in the amended claim features, including:
-"wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time" (claim 7);
-"wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object" (claim 7); and
-"wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan" (claim 12).
The Office Action appears to admit that Yang does not disclose the determination and control unit at page 11.
The Office Action cites paragraph [0067] of Frye for disclosing a process in which the reliability of V2X-transmitted information is verified based on criteria such as plausibility, latency, and comparison with onboard sensor data. If the V2X information is deemed reliable, the vehicle's surround sensor system may be deactivated or its transmit power reduced.
However, Frye's paragraph [0067] merely reads:
In step 340, it is verified whether the information transferred by way of the V2X messages is reliable. Various criteria can be used for this purpose, for example the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria. If it is detected in step 340 that the information transferred by way of the V2X messages is reliable, then either the active surround sensor system on board the vehicle is deactivated in full, in accordance with step 352, or the transmit power of the active surround sensor system on board the vehicle is reduced, in accordance with step 354. In this case, it is possible for the vehicle itself to measure the interference and to decide, on the basis of the result of the interference measurement, whether and to what extent the transmit power is reduced. The reduced transmit power can, for example, be selected such that the range is still adequate for emergency braking but more distant objects can only be received
from the infrastructure.
This citation discloses controlling the operation of the vehicle's surrounding sensor system (activation/deactivation, transmission power adjustment) according to the result of reliability verification of V2X messages. However, it neither discloses nor suggests the specific technical feature of the presently claimed embodiments, in which the corrected positional information of an object and the reflection region of the infrastructure-edge system are considered, such that, if the object falls within the reflection region, the object recognized by the infrastructure-edge system is utilized for autonomous driving judgment, whereas if the object is recognized by the sensors of the autonomous vehicle, it is not utilized for such judgment.
Further, the citations to Lim, Hwang, Urano, and Lee have not been shown to cure the above-discussed deficiencies. Accordingly, the proposed combinations are likewise deficient, even considering the knowledge of one of ordinary skill in the art.
In view of the above, it is respectfully submitted that independent claims 7 and 12 recite patentable subject matter, and that the dependent claims are allowable at least based on their dependence from an allowable base claim, as well as for their own recitations. Favorable reconsideration and withdrawal of the rejections under 35 U.S.C. § 103 are respectfully requested.”
Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Specifically, as to the argument that Frye (US2023/0091772A1) “neither discloses nor suggests the specific technical feature of the presently claimed embodiments, in which the corrected positional information of an object and the reflection region of the infrastructure-edge system are considered, such that, if the object falls within the reflection region, the object recognized by the infrastructure-edge system is utilized for autonomous driving judgment, whereas if the object is recognized by the sensors of the autonomous vehicle, it is not utilized for such judgment”, Frye teaches an infrastructure system that transmits object lists, for example objects in the area of influence and their properties, in the form of a V2X message to vehicles in the area of influence (see at least [0053] “FIG. 1A shows an infrastructure system 10 for assisting at least partially automated driving of vehicles traveling in the area of influence 17 of the infrastructure system 10…To acquire the measurement data, the infrastructure sensor 12 has a particular transmit power, indicated by the schematically illustrated wave fronts 14. The infrastructure system 10 has an arithmetic logic unit 20, which generates object lists from the acquired surroundings information. The object lists include, for example, objects in the area of influence 17 and their properties, such as positions, speeds, movement directions, etc. 
The infrastructure system 10 has a communication unit 15, which is configured to transmit the object lists in the form of a V2X message 35 (for example a cooperative perception message, CPM) to vehicles 40, 50, 60, 70, 80, 90 in the area of influence 17 wirelessly, for example via direct communication technology such as DSRC or C-V2X, or via mobile communications. The communication unit 15 communicates current object lists by way of V2X messages to the vehicles in the area of influence 17 of the infrastructure system 10, preferably continuously or at regular intervals.”), determining if information transferred by way of V2X messages is reliable through checking the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria (see at least [0067] “In step 340, it is verified whether the information transferred by way of the V2X messages is reliable. Various criteria can be used for this purpose, for example the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria.”), when the V2X message is reliable, deactivating the surround sensor system on board the vehicle in full or reducing the transmit power of the surround sensor system on board the vehicle (see at least [0067] “If it is detected in step 340 that the information transferred by way of the V2X messages is reliable, then either the active surround sensor system on board the vehicle is deactivated in full, in accordance with step 352, or the transmit power of the active surround sensor system on board the vehicle is reduced, in accordance with step 354.”), and implementing at least one automated driving function of the vehicle on the basis of surroundings data acquired by the active vehicle surround sensor and/or on the basis of data received from an 
infrastructure (see at least [0009] “According to a first aspect of the present invention, a method is provided for controlling a transmit power of at least one active vehicle surround sensor of a vehicle driven in an at least partially automated manner, at least one automated driving function of the vehicle being implemented on the basis of surroundings data acquired by the active vehicle surround sensor and/or on the basis of data received from an infrastructure.”).
Examiner interprets that the reflection region (reflection area) is encompassed at least by the area of influence, the convergence delay time is encompassed at least by latency times, and the corrected positional information of an object is encompassed at least by verifying whether the information transferred by way of the V2X messages is reliable.
Yang et al. (US2023/0211776A1), hereinafter Yang, teaches estimating attributes of an obstacle by using V2X data and further fusing or replacing autonomous vehicle sensor data with roadside V2X data (see at least [0039]-[0042]).
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Yang (estimating attributes of an obstacle by using V2X data and further fusing or replacing autonomous vehicle sensor data with roadside V2X data) with the teachings of Frye (an infrastructure system that transmits object lists, for example objects in the area of influence and their properties, in the form of a V2X message to vehicles in the area of influence; determining whether information transferred by way of V2X messages is reliable by checking the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria; deactivating the surround sensor system on board the vehicle in full, or reducing the transmit power of the surround sensor system on board the vehicle, when the V2X message is reliable; and implementing at least one automated driving function of the vehicle on the basis of surroundings data acquired by the active vehicle surround sensor and/or on the basis of data received from an infrastructure) in order to teach the specific technical feature of the presently claimed embodiments, in which the corrected positional information of an object and the reflection region of the infrastructure-edge system are considered, such that, if the object falls within the reflection region, the object recognized by the infrastructure-edge system is utilized for autonomous driving judgment, whereas if the object is recognized by the sensors of the autonomous vehicle, it is not utilized for such judgment. This is because Frye teaches comparing object lists associated with a sensor onboard a vehicle with object lists associated with an infrastructure system and utilizing information received from an infrastructure when information transferred by way of the V2X messages is reliable, and Yang teaches further fusing or replacing autonomous vehicle sensor data 
with roadside V2X data. Therefore, the 103 rejections are maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 7-8, 10, 12-15, and 17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 7 and 12 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system for infrastructure dynamic object recognition information convergence processing in an autonomous vehicle and a method of infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, respectively.
Claim 7 recites the limitations “perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system”, “corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time”, and “considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object”, and Claim 12 recites “establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time” and “wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan”. These limitations, as drafted, recite a process that, under the broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting a computer, nothing in the claims precludes the steps from being performed in the mind. 
For example, but for the recitation of a computer, these claims encompass a person observing data collected from a sensor and data collected by an InfraEdge system, processing/analyzing the data, and making a decision for driving based on the observed data. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
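Purely for illustration, and not as a characterization of the claims or the cited references, the recited determination reduces to a few conditional steps. In the sketch below, all names are hypothetical, and the reflection area is assumed (as a simplification) to be an axis-aligned rectangle; the location correction is assumed to be a simple dead-reckoning by the convergence delay time:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float     # east position (m) at the recorded reception time
    y: float     # north position (m) at the recorded reception time
    vx: float    # east velocity (m/s)
    vy: float    # north velocity (m/s)
    source: str  # "infraedge" or "vehicle_sensor" (hypothetical labels)

def correct_location(obj: TrackedObject, convergence_delay_s: float):
    """Dead-reckon the reported position forward by the convergence delay
    time (current time minus the recorded V2X message-reception time)."""
    return (obj.x + obj.vx * convergence_delay_s,
            obj.y + obj.vy * convergence_delay_s)

def in_reflection_area(pos, area):
    """area is a hypothetical axis-aligned boundary (xmin, ymin, xmax, ymax)."""
    x, y = pos
    xmin, ymin, xmax, ymax = area
    return xmin <= x <= xmax and ymin <= y <= ymax

def use_for_determination(obj: TrackedObject, area, convergence_delay_s: float):
    """Inside the reflection area, keep the object recognized by the
    InfraEdge system and discard the duplicate recognized by the
    vehicle's own sensor."""
    if not in_reflection_area(correct_location(obj, convergence_delay_s), area):
        return True  # outside the area: left to ordinary on-board processing
    return obj.source == "infraedge"
```

Each step (a subtraction of times, a multiplication, a bounds check, and a comparison of labels) is the kind of observation and judgment a person could perform mentally or with pen and paper.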
Claim 7 recites the additional elements “a recognition unit configured to receive object information collected by a sensor of the autonomous vehicle to recognize an environment” and “a V2X reception message processing unit configured to receive object information collected by an InfraEdge system”, and Claim 12 recites the additional elements “(a) collecting first object information collected by a sensor of an autonomous vehicle and second object information collected by an InfraEdge system” and “wherein, when considering a reflection area of the InfraEdge system, in step (a), in collecting the second object information, a message transmitted according to a message protocol additionally including the reflection area is received”, which are recited at a high level of generality and amount to mere data gathering, which is a form of insignificant extra-solution activity. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Claims 7 and 12 as a whole merely describe how to generally “apply” the concept of infrastructure dynamic object recognition information convergence processing. The claimed computer components are recited at a high level of generality and are merely invoked as tools to perform an existing process. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. As such, the claims are ineligible.
Claims 8, 10, 13-15, and 17 are also rejected as they do not recite additional elements that integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Further, the recitation of an additional element that integrates the abstract idea into a practical application, such as positively reciting a control step (for example, wherein the determination and control unit establishes a driving plan using the object information collected by the sensor and the object information collected by the InfraEdge system, and generates and follows a local route; see at least Page 3, lines 3-5 of Applicant's specification as filed), may help to overcome the 101 rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1) in further view of Lim (US2020/0026290A1), hereinafter Yang, Frye, and Lim respectively.
Regarding claim 7, (Currently amended) Yang teaches a system for infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the system comprising: a recognition unit configured to receive object information collected by a sensor of the autonomous vehicle to recognize an environment (see at least [0094] “The first acquisition module 701 is configured to acquire vehicle-end data collected by at least one sensor of an autonomous driving vehicle.” also see at least [0029]-[0030]); a V2X reception message processing unit configured to receive object information collected by an InfraEdge system (see at least [0094] “The second acquisition module 702 is configured to acquire vehicle wireless communication V2X data transmitted by a roadside device.” also see at least [0033]-[0036]); and a determination and control unit configured to use the object information collected by the sensor and the object information collected by the InfraEdge system (see at least [0094] “The fusion module 703 is configured to fuse, in response to determining that an obstacle is at an edge of a blind spot of the autonomous driving vehicle, the vehicle-end data and the V2X data to obtain an attribute estimated value of the obstacle.” also see at least [0043]).
Examiner interprets that the recognition unit is encompassed at least by the first acquisition module 701, the V2X reception message processing unit is encompassed at least by the second acquisition module 702, and the determination and control unit is encompassed at least by the fusion module 703.
Yang does not explicitly teach perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system, wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time, wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object.
However, Frye more explicitly teaches perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system (see at least [0053] “FIG. 1A shows an infrastructure system 10 for assisting at least partially automated driving of vehicles traveling in the area of influence 17 of the infrastructure system 10.” also see at least [0007], [0025], and [0053]). Frye suggests wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object (see at least [0067] “In step 340, it is verified whether the information transferred by way of the V2X messages is reliable. Various criteria can be used for this purpose, for example the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria. If it is detected in step 340 that the information transferred by way of the V2X messages is reliable, then either the active surround sensor system on board the vehicle is deactivated in full, in accordance with step 352, or the transmit power of the active surround sensor system on board the vehicle is reduced, in accordance with step 354. In this case, it is possible for the vehicle itself to measure the interference and to decide, on the basis of the result of the interference measurement, whether and to what extent the transmit power is reduced. 
The reduced transmit power can, for example, be selected such that the range is still adequate for emergency braking but more distant objects can only be received from the infrastructure.”) and wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time (see at least [0067] “In step 340, it is verified whether the information transferred by way of the V2X messages is reliable. Various criteria can be used for this purpose, for example the plausibility in relation to messages received earlier, a comparison with object lists generated by the surround sensor system on board the vehicle, latency times, and/or other criteria”).
Examiner interprets that the reflection area of the InfraEdge system is encompassed at least by the area of influence 17 of the infrastructure system 10, the convergence delay time is encompassed at least by the latency times, the corrected positional information of an object is encompassed at least by verifying whether the information transferred by way of the V2X messages is reliable, and the limitation “if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object” is suggested by deactivating the active surround sensor system on board the vehicle in full, in accordance with step 352, or reducing its transmit power, in accordance with step 354.
However, Lim more explicitly teaches wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time, (see at least [0479]-[0480] “wherein the autonomous vehicle corrects position information included in the V2X message based on the value of the time.” also see at least [0430]).
Examiner interprets that convergence delay time is encompassed by the value of time.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Yang of a system for infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the system comprising: a recognition unit configured to receive object information collected by a sensor of the autonomous vehicle to recognize an environment; a V2X reception message processing unit configured to receive object information collected by an InfraEdge system; and a determination and control unit configured to use the object information collected by the sensor and the object information collected by the InfraEdge system with the teaching of perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system found in Frye, the suggested teaching of wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object and wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time found in Frye, and the teaching of wherein the determination and control unit corrects a location of an object included in the object 
information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time found in Lim. One could combine the teachings in order to have a system for infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the system comprising: a recognition unit configured to receive object information collected by a sensor of the autonomous vehicle to recognize an environment; a V2X reception message processing unit configured to receive object information collected by an InfraEdge system; and a determination and control unit configured to use the object information collected by the sensor and the object information collected by the InfraEdge system, and perform a determination for autonomous driving in consideration of a reflection area of the InfraEdge system, wherein the determination and control unit corrects a location of an object included in the object information collected by the InfraEdge system in consideration of the convergence delay time, which is a time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time, wherein the determination and control unit considers a corrected location of the object included in the object information collected by the InfraEdge system and the reflection area of the InfraEdge system, and when the location information of the corrected location of the object included in the object information collected by the InfraEdge system is included in the reflection area, if the object is a recognized object recognized by the InfraEdge system, uses the recognized object to determine the autonomous driving, and if it is an object recognized by the sensor of the autonomous vehicle, discards the recognized object with a reasonable expectation of success. 
One would have been motivated to do so in order to improve the speed of estimating the position of an obstacle and/or the accuracy of estimating the position of an obstacle (see at least Yang [0059]) and further to improve driving safety (see at least Yang [0034] and [0042]-[0043]).
Regarding claim 14, (Original) the combination of Yang and Frye teaches the method of claim 12 as detailed below.
Yang does not explicitly teach wherein, in step (b), a location of an object included in the second object information is corrected in consideration of the convergence delay time and an overlap between a location of a corrected object and a location included in the first object information is calculated to confirm whether the object is the same object.
Lim teaches wherein, in step (b), a location of an object included in the second object information is corrected in consideration of the convergence delay time (see at least [0479]-[0480] “wherein the autonomous vehicle corrects position information included in the V2X message based on the value of the time.” also see at least [0430]).
Examiner interprets that convergence delay time is encompassed by the value of time.
Frye more explicitly teaches an overlap between a location of a corrected object and a location included in the first object information is calculated to confirm whether the object is the same object (see at least [0057] “it can be checked whether an object list communicated along with the V2X message 35 contains enough information to safely execute an at least partially automated driving function of the relevant vehicle 40, 50, 60, 70, 80, 90 without the measurement data acquired by the active vehicle surround sensors 22, 24 additionally being needed.” also see at least [0067]).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein, in step (b), a location of an object included in the second object information is corrected in consideration of the convergence delay time found in Lim and the teaching of an overlap between a location of a corrected object and a location included in the first object information is calculated to confirm whether the object is the same object found in Frye. One could combine the teachings in order to have a method wherein, in step (b), a location of an object included in the second object information is corrected in consideration of the convergence delay time and an overlap between a location of a corrected object and a location included in the first object information is calculated to confirm whether the object is the same object with a reasonable expectation of success. One would have been motivated to do so in order to improve the speed of estimating the position of an obstacle and/or the accuracy of estimating the position of an obstacle (see at least Yang [0059]).
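Purely as an illustration of the overlap calculation discussed above for claim 14, and not as a characterization of any reference, an intersection-over-union check on bounding boxes is one common way such a same-object confirmation could be computed; the box representation and the threshold value are assumptions of this sketch:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0.0 else 0.0

def is_same_object(corrected_box, sensor_box, threshold=0.3):
    """Confirm identity when the overlap between the delay-corrected second
    object and the first (on-board) object exceeds a hypothetical threshold."""
    return box_iou(corrected_box, sensor_box) >= threshold
```

The threshold of 0.3 is arbitrary; any implementation of the claimed overlap calculation would choose its own criterion.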
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1) in view of Lim (US2020/0026290A1) in further view of Hwang et al. (US2020/0021960A1), hereinafter Yang, Frye, Lim, and Hwang respectively.
Regarding claim 8, (Original) the combination of Yang, Frye, and Lim teaches the system of claim 7 as detailed above.
Yang teaches wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including a recognition information transmission time (see at least [0035] “The V2X data may include...data such as timestamps during transmission of the roadside device RSU.”) and recognition information (see at least [0035] “The V2X data may include attribute information such as position information, speed information of vehicles on the road,”).
Examiner interprets that recognition information is encompassed at least by attribute information.
Yang does not explicitly teach wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including ID of the InfraEdge system, a recognition processing time, the reflection area, and recognition accuracy of the InfraEdge system.
Frye teaches wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including the reflection area (see at least [0053] “The communication unit 15 communicates current object lists by way of V2X messages to the vehicles in the area of influence 17 of the infrastructure system 10, preferably continuously or at regular intervals.”).
Examiner interprets that reflection area is encompassed at least by the area of influence 17 of the infrastructure system 10.
Hwang teaches wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including ID of the InfraEdge system (see at least [0161]-[0163] “FIG. 13 illustrates a first embodiment of a common container in a V2I message for a V2I service... Referring to FIG. 13, the common container may include ID related information... In an embodiment, ID related information may include... station ID (Stationed) information,” also see at least [0165]), a recognition processing time (see at least [0162] “Referring to FIG. 13, the common container may include...event related information,” and [0168] “the event related information may include...valid duration (validityDur) information.” also see at least [0170]), and recognition accuracy of the InfraEdge system (see at least [0162] “Referring to FIG. 13, the common container may include...position related information and/or lane related information.” and [0171] “the position related information may include reference position (refPos) information, position accuracy (posAcc) information, heading information and/or heading reliability (HeadingConf) information.” also see at least [0174] and [0176]).
Examiner interprets ID of the InfraEdge system is encompassed at least by station ID (StationID), recognition processing time is encompassed at least by valid duration (validityDur), and recognition accuracy of the InfraEdge system is encompassed at least by position accuracy (posAcc) and/or heading reliability (HeadingConf).
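Purely for illustration of the claim-8 message protocol fields as mapped above, and not as a representation of any reference's actual message format, the fields could be organized as follows; all names, types, and units are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class InfraEdgeMessage:
    """Illustrative layout of the claim-8 protocol fields, with the
    prior-art elements they were mapped to noted in comments."""
    station_id: str             # ID of the InfraEdge system (cf. Hwang StationID)
    transmission_time_s: float  # recognition information transmission time (cf. Yang timestamps)
    processing_time_s: float    # recognition processing time (cf. Hwang validityDur)
    reflection_area: tuple      # e.g. (xmin, ymin, xmax, ymax) (cf. Frye area of influence 17)
    accuracy: float             # recognition accuracy (cf. Hwang posAcc / HeadingConf)
    objects: list = field(default_factory=list)  # recognition information (attribute info)
```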
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including the reflection area found in Frye with a reasonable expectation of success. One would have been motivated to do so in order to provide correct surrounding data to a vehicle in the event of degraded or incorrect vehicle sensor data (see at least Frye, [0007]). It would have also been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein the V2X reception message processing unit receives a message transmitted according to a message protocol including ID of the InfraEdge system, a recognition processing time, and recognition accuracy of the InfraEdge system found in Hwang with a reasonable expectation of success. One would have been motivated to do so in order to help a vehicle to report a detected object in time, reduce vehicle uncertainty and further to improve driving safety (see at least Yang [0034] and [0042]-[0043]).
Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1) in view of Lim (US2020/0026290A1) in view of Hwang et al. (US2020/0021960A1) in further view of Lee et al. (US2022/0332327A1), hereinafter Yang, Frye, Lim, Hwang, and Lee respectively.
Regarding claim 10, (Currently amended) the combination of Yang, Frye, Lim, and Hwang teaches the system of claim 8 as detailed above.
Yang does not explicitly teach wherein the determination and control unit considers location information of a corrected object and the reflection area of the InfraEdge system to calculate an overlap between the location information of the corrected object and location information in the object information collected by the sensor of the autonomous vehicle when the location information of the corrected object and the location information of the collected object are not included in the reflection area and confirm whether the object is the same object, and uses the corresponding object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system to determine the autonomous driving when it is confirmed that the object is not the same and selects a final object based on recognition accuracy when it is confirmed that the object is the same object.
Frye teaches wherein the determination and control unit considers location information of a corrected object and the reflection area of the InfraEdge system to calculate an overlap between the location information of the corrected object and location information in the object information collected by the sensor of the autonomous vehicle (see at least [0057] “it can be checked whether an object list communicated along with the V2X message 35 contains enough information to safely execute an at least partially automated driving function of the relevant vehicle 40, 50, 60, 70, 80, 90 without the measurement data acquired by the active vehicle surround sensors 22, 24 additionally being needed.” also see at least [0067]).
Lee more explicitly teaches when the location information of the corrected object and the location information of the collected object are not included in the reflection area and confirm whether the object is the same object (see at least [0073] “Referring again to FIG. 1, after step 120, whether the shared sensor fusion information and the host vehicle sensor fusion information are information about the same object is inspected (step 130). To this end, the object identicality inspector 243 may receive the shared sensor fusion information converted to the host vehicle coordinate system by the coordinate converter 241 and the host vehicle sensor fusion information provided from the communicator 222, may inspect whether the shared sensor fusion information and the host vehicle sensor fusion information are information about the same object,” and [0074] “FIGS. 4A to 4C are diagrams for helping understand step 130 shown in FIG. 1, wherein CAS represents shared sensor fusion information selected as a candidate,”), and uses the information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system to determine the autonomous driving when it is confirmed that the object is not the same (see at least [0101] “when step 140 is performed even when the shared sensor fusion information and the host vehicle sensor fusion information are not information about the same object, that is, when the shared sensor fusion information has not been selected as any one of the primary candidate, the secondary candidate, and the tertiary candidate, the order number of selections as the candidate identified in step 147, i.e. TN in Equation 3, may be taken as a default.”) and selects a final object based on recognition accuracy when it is confirmed that the object is the same object (see at least [0103]-[0104] “Referring again to FIG. 
1, after step 140, based on the estimated reliability, fusion track information of an object located near at least one of the host vehicle or the other vehicle is generated using the host vehicle sensor fusion information and the shared sensor fusion information (step 150)...For example, the shape of an object may be estimated to be a rectangular box shape, and the fusion track information thereof may be generated using the shared sensor fusion information and the host vehicle sensor fusion information, which are estimated to have reliability greater than or equal to a threshold value.”).
Examiner interprets that reflection area is encompassed at least by CAS.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein the determination and control unit considers location information of a corrected object and the reflection area of the InfraEdge system to calculate an overlap between the location information of the corrected object and location information in the object information collected by the sensor of the autonomous vehicle found in Frye and the teaching of when the location information of the corrected object and the location information of the collected object are not included in the reflection area and confirm whether the object is the same object, and uses the object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system to determine the autonomous driving when it is confirmed that the object is not the same and selects a final object based on recognition accuracy when it is confirmed that the object is the same object found in Lee. 
One could combine the teachings in order to have a system wherein the determination and control unit considers location information of a corrected object and the reflection area of the InfraEdge system to calculate an overlap between the location information of the corrected object and location information in the object information collected by the sensor of the autonomous vehicle when the location information of the corrected object and the location information of the collected object are not included in the reflection area and confirm whether the object is the same object, and uses the corresponding object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system to determine the autonomous driving when it is confirmed that the object is not the same and selects a final object based on recognition accuracy when it is confirmed that the object is the same object with a reasonable expectation of success. One would have been motivated to do so in order to improve the speed of estimating the position of an obstacle and/or the accuracy of estimating the position of an obstacle (see at least Yang [0059]).
Regarding claim 17, (Currently amended) the combination of Yang, Frye, and Hwang teaches the method of claim 13 as detailed below.
Yang does not explicitly teach wherein, in step (b), it is confirmed whether the location of the second object information on which correction is performed is included in the reflection area of the InfraEdge system in consideration of the convergence delay time, and when the location of the second object information is not included in the reflection area, an overlap between the location of the second object information on which the correction is performed and the location of the first object information is calculated to confirm whether the object is the same object, when it is confirmed that the object is not the same, a corresponding object is used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy.
Lim teaches wherein, in step (b), it is confirmed whether the location of the second object information on which correction is performed is included in the reflection area of the InfraEdge system in consideration of the convergence delay time (see at least [0479]-[0480] “wherein the autonomous vehicle corrects position information included in the V2X message based on the value of the time.” also see at least [0430]).
Examiner interprets that convergence delay time is encompassed by the value of time.
Lee more explicitly teaches when the location of the second object information is not included in the reflection area, an overlap between the location of the second object information on which the correction is performed and the location of the first object information is calculated to confirm whether the object is the same object (see at least [0073] “Referring again to FIG. 1, after step 120, whether the shared sensor fusion information and the host vehicle sensor fusion information are information about the same object is inspected (step 130). To this end, the object identicality inspector 243 may receive the shared sensor fusion information converted to the host vehicle coordinate system by the coordinate converter 241 and the host vehicle sensor fusion information provided from the communicator 222, may inspect whether the shared sensor fusion information and the host vehicle sensor fusion information are information about the same object,” and [0074] “FIGS. 4A to 4C are diagrams for helping understand step 130 shown in FIG. 1, wherein CAS represents shared sensor fusion information selected as a candidate,”), when it is confirmed that the object is not the same, a corresponding object is used to establish the driving plan (see at least [0101] “when step 140 is performed even when the shared sensor fusion information and the host vehicle sensor fusion information are not information about the same object, that is, when the shared sensor fusion information has not been selected as any one of the primary candidate, the secondary candidate, and the tertiary candidate, the order number of selections as the candidate identified in step 147, i.e. TN in Equation 3, may be taken as a default.”), and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy (see at least [0103]-[0104] “Referring again to FIG. 
1, after step 140, based on the estimated reliability, fusion track information of an object located near at least one of the host vehicle or the other vehicle is generated using the host vehicle sensor fusion information and the shared sensor fusion information (step 150)...For example, the shape of an object may be estimated to be a rectangular box shape, and the fusion track information thereof may be generated using the shared sensor fusion information and the host vehicle sensor fusion information, which are estimated to have reliability greater than or equal to a threshold value.”).
Examiner interprets that reflection area is encompassed at least by CAS.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein, in step (b), it is confirmed whether the location of the second object information on which correction is performed is included in the reflection area of the InfraEdge system in consideration of the convergence delay time found in Lim and the teaching of when the location of the second object information is not included in the reflection area, an overlap between the location of the second object information on which the correction is performed and the location of the first object information is calculated to confirm whether the object is the same object, when it is confirmed that the object is not the same, a corresponding object is used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy found in Lee. One could combine the teachings in order to have a method wherein, in step (b), it is confirmed whether the location of the second object information on which correction is performed is included in the reflection area of the InfraEdge system in consideration of the convergence delay time, and when the location of the second object information is not included in the reflection area, an overlap between the location of the second object information on which the correction is performed and the location of the first object information is calculated to confirm whether the object is the same object, when it is confirmed that the object is not the same, a corresponding object is used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy with a reasonable expectation of success.
One would have been motivated to do so in order to improve the speed of estimating the position of an obstacle and/or the accuracy of estimating the position of an obstacle (see at least Yang [0059]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1), hereinafter Yang and Frye respectively.
Regarding claim 12, (Currently amended) Yang teaches a method of infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the method comprising: (a) collecting first object information collected by a sensor of an autonomous vehicle (see at least [0094] “The first acquisition module 701 is configured to acquire vehicle-end data collected by at least one sensor of an autonomous driving vehicle.” also see at least [0029]-[0030]) and second object information collected by an InfraEdge system (see at least [0094] “The second acquisition module 702 is configured to acquire vehicle wireless communication V2X data transmitted by a roadside device.” also see at least [0033]-[0036]).
Yang does not explicitly teach (b) establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time, wherein, when considering a reflection area of the InfraEdge system, in step (a), in collecting the second object information, a message transmitted according to a message protocol additionally including the reflection area is received, and wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan.
Frye more explicitly teaches wherein, when considering a reflection area of the InfraEdge system, in step (a), in collecting the second object information, a message transmitted according to a message protocol additionally including the reflection area is received (see at least [0053] “The communication unit 15 communicates current object lists by way of V2X messages to the vehicles in the area of influence 17 of the infrastructure system 10, preferably continuously or at regular intervals.”) and b) establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time (see at least [0009] “controlling a transmit power of at least one active vehicle surround sensor of a vehicle driven in an at least partially automated manner, at least one automated driving function of the vehicle being implemented on the basis of surroundings data acquired by the active vehicle surround sensor and/or on the basis of data received from an infrastructure.” also see at least [0067] “In step 340, it is verified whether the information transferred by way of the V2X messages is reliable. 
Various criteria can be used for this purpose, for example...latency times...If it is detected in step 340 that the information transferred by way of the V2X messages is reliable, then either the active surround sensor system on board the vehicle is deactivated in full, in accordance with step 352, or the transmit power of the active surround sensor system on board the vehicle is reduced, in accordance with step 354.”), and wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan (see at least [0025] “Similarly, it can also be identified that the vehicle is approaching an area of influence of an infrastructure so that preparatory steps for implementing an automated driving function of the vehicle, based on data yet to be received from the infrastructure, may already be initiated before the first message is received from the infrastructure.” also see at least [0053]).
Examiner interprets that reflection area is encompassed at least by the area of influence 17 of the infrastructure system 10 and time difference between a time recorded according to message reception of the V2X reception message processing unit and a current time is encompassed at least by latency times.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Yang of a method of infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the method comprising: (a) collecting first object information collected by a sensor of an autonomous vehicle and second object information collected by an InfraEdge system with the teaching of (b) establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time, wherein, when considering a reflection area of the InfraEdge system, in step (a), in collecting the second object information, a message transmitted according to a message protocol additionally including the reflection area is received, and wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan found in Frye. 
One could combine the teachings in order to have a method of infrastructure dynamic object recognition information convergence processing in an autonomous vehicle, the method comprising: (a) collecting first object information collected by a sensor of an autonomous vehicle and second object information collected by an InfraEdge system; and (b) establishing a driving plan for autonomous driving in consideration of a convergence delay time which is a time difference between a time recorded according to reception of the second object information and a current time, wherein, when considering a reflection area of the InfraEdge system, in step (a), in collecting the second object information, a message transmitted according to a message protocol additionally including the reflection area is received, and wherein, in step (b), it is confirmed whether the location of the second object information on which the correction is performed in consideration of the convergence delay time is included in the reflection area of the InfraEdge system, and when the location of the second object information is included in the reflection area, the second object information is used to establish the driving plan with a reasonable expectation of success. One would have been motivated to do so in order to help a vehicle report a detected object in time, reduce vehicle uncertainty, and further improve driving safety (see at least Yang [0034] and [0042]-[0043]). One would have also been motivated to do so in order to provide correct surrounding data to a vehicle in the event of degraded or incorrect vehicle sensor data (see at least Frye, [0007]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1) in further view of Hwang et al. (US2020/0021960A1), hereinafter Yang, Frye, and Hwang respectively.
Regarding claim 13, (Currently amended) the combination of Yang and Frye teaches the method of claim 12 as detailed above.
Yang teaches wherein, in step (a), when the second object information is collected, the message transmitted according to the message protocol including a recognition information transmission time (see at least [0035] “The V2X data may include...data such as timestamps during transmission of the roadside device RSU.”) and recognition information of the InfraEdge system is received (see at least [0035] “The V2X data may include attribute information such as position information, speed information of vehicles on the road,”).
Examiner interprets that recognition information is encompassed at least by attribute information.
Yang does not explicitly teach wherein, in step (a), when the second object information is collected, the message transmitted according to the message protocol including ID, a recognition processing time, and recognition accuracy of the InfraEdge system is received.
Hwang more explicitly teaches wherein, in step (a), when the second object information is collected, the message transmitted according to the message protocol including ID (see at least [0161]-[0163] “FIG. 13 illustrates a first embodiment of a common container in a V2I message for a V2I service... Referring to FIG. 13, the common container may include ID related information... In an embodiment, ID related information may include... station ID (StationID) information,” also see at least [0165]), a recognition processing time (see at least [0162] “Referring to FIG. 13, the common container may include...event related information,” and [0168] “the event related information may include...valid duration (validityDur) information.” also see at least [0170]), and recognition accuracy of the InfraEdge system is received (see at least [0162] “Referring to FIG. 13, the common container may include...position related information and/or lane related information.” and [0171] “the position related information may include reference position (refPos) information, position accuracy (posAcc) information, heading information and/or heading reliability (HeadingConf) information.” also see at least [0174] and [0176]).
Examiner interprets ID of the InfraEdge system is encompassed at least by station ID (StationID), recognition processing time is encompassed at least by valid duration (validityDur), and recognition accuracy of the InfraEdge system is encompassed at least by position accuracy (posAcc) and/or heading reliability (HeadingConf).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein, in step (a), when the second object information is collected, the message transmitted according to the message protocol including ID, a recognition processing time, and recognition accuracy of the InfraEdge system is received found in Hwang with a reasonable expectation of success. One would have been motivated to do so in order to help a vehicle report a detected object in time, reduce vehicle uncertainty, and further improve driving safety (see at least Yang [0034] and [0042]-[0043]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US2023/0211776A1) in view of Frye (US2023/0091772A1) in view of Lim (US2020/0026290A1) in further view of Lee et al. (US2022/0332327A1), hereinafter Yang, Frye, Lim, and Lee respectively.
Regarding claim 15, (Currently Amended) the combination of Yang, Frye, and Lim teaches the method of claim 14 as detailed above.
Yang does not explicitly teach wherein, in step (b), when it is confirmed that the object is not the same, the object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system are used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy.
Lee teaches wherein, in step (b), when it is confirmed that the object is not the same, the object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system are used to establish the driving plan (see at least [0101] “when step 140 is performed even when the shared sensor fusion information and the host vehicle sensor fusion information are not information about the same object, that is, when the shared sensor fusion information has not been selected as any one of the primary candidate, the secondary candidate, and the tertiary candidate, the order number of selections as the candidate identified in step 147, i.e. TN in Equation 3, may be taken as a default.”), and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy (see at least [0103]-[0104] “Referring again to FIG. 1, after step 140, based on the estimated reliability, fusion track information of an object located near at least one of the host vehicle or the other vehicle is generated using the host vehicle sensor fusion information and the shared sensor fusion information (step 150)...For example, the shape of an object may be estimated to be a rectangular box shape, and the fusion track information thereof may be generated using the shared sensor fusion information and the host vehicle sensor fusion information, which are estimated to have reliability greater than or equal to a threshold value.”).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Yang with the teaching of wherein, in step (b), when it is confirmed that the object is not the same, the object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system are used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy found in Lee. One could combine the teachings in order to have a method wherein, in step (b), when it is confirmed that the object is not the same, the object information collected by a sensor of the autonomous vehicle and the object information collected by an InfraEdge system are used to establish the driving plan, and when it is confirmed that the object is the same object, a final object is selected based on the recognition accuracy. One would have been motivated to do so in order to improve the speed of estimating the position of an obstacle and/or the accuracy of estimating the position of an obstacle (see at least Yang [0059]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Li et al. (US2024/0259989A1) discloses a vehicle-road cooperative positioning method and an on-board positioning system, which relate to the technical field of Internet of vehicles. The method includes: receiving a first vehicle to everything (V2X) message sent by at least one road side unit (RSU), where the first V2X message carries location information of the RSU; determining first positioning information between a vehicle and the RSU based on a first parameter, where the first positioning information includes first distance information, or includes first distance information and first angle information, and the first parameter is determined by measuring a signal transmitted on a V2X sidelink; and determining a location of the vehicle according to the first positioning information and the location information of the RSU.
Sanberg et al. (US2023/0176183A1) discloses a method, system, apparatus, and architecture for validating one or more sensors by collecting and processing sensor data signals from on-board vehicle sensors to generate a local environmental map identifying one or more first traffic features located in an exterior environment of the vehicle, by collecting and processing sensor data signals from a remote traffic participant to generate an external environmental map identifying one or more second traffic features located in an exterior environment of the remote traffic participant, and by performing a diagnostic sensor cross check on on-board sensors by comparing the local environmental map with at least the external environmental map to detect any discrepancy between the one or more first traffic features and the one or more second traffic features which indicates that one or more of the on-board sensors is defective.
Son (US2021/0279481A1) discloses a driver assistance apparatus that includes a global positioning system (GPS) module configured to obtain position data of a vehicle; a Light Detection And Ranging (LiDAR) installed in the vehicle to have an external field of view of the vehicle, and configured to obtain first image data for the external field of view of the vehicle; a communication interface configured to receive second image data obtained by an external LiDAR disposed at a position different from the vehicle; and a controller including at least one processor configured to process the first image data and the second image data. The controller may be configured to compare the first image data and the second image data, and to correct the position data when an error occurs as a result of the comparison.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSSA N RORIE whose telephone number is (571)272-6962. The examiner can normally be reached Monday - Friday (out of office every other Friday) 7:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.R./Examiner, Art Unit 3662
/JELANI A SMITH/Supervisory Patent Examiner, Art Unit 3662