DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner's Note
Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to Applicant's definitions that are not specifically set forth in the claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite obtaining, identifying, generating, and transmitting. This judicial exception is not integrated into a practical application, nor does it amount to significantly more, because the implementation is a generic application of an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Subject Matter Eligibility Analysis of representative claim 1 (see MPEP 2106.03):
Step 1: As a method, the claim is directed to a statutory category.
Step 2A: Prong 1: Claim 1 is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 is directed to:
obtaining a plurality of contextual features of a driving environment based on knowledge related to the driving environment obtained using sensor data collected by a first vehicle in the driving environment; identifying a plurality of nodes of a vehicular knowledge network based on the plurality of contextual features, wherein each of the plurality of nodes comprises node knowledge associated with a respective subset of the plurality of contextual features; generating merged knowledge by combining the first knowledge with the node knowledge of at least one of the plurality of nodes; and transmitting the merged knowledge to a second vehicle, wherein the second vehicle performs a vehicular operation based on the merged knowledge.
These limitations recite a concept that falls into the “mental process” group of abstract ideas. Using obtained contextual information around a vehicle to identify nodes of knowledge such that the respective subsets of node knowledge could be merged could be done in the human mind or with the aid of pen and paper (see MPEP § 2106.04(a)(2)(III)). An akin example would be a driver driving past a sign reducing the speed limit and later seeing orange cones, then merging the information into a finding that a construction zone is beginning.
Step 2A: Prong 2: The Applicant recites the following additional elements in conjunction with the abstract idea: (1) “obtaining a plurality of contextual features of a driving environment based on knowledge related to the driving environment obtained using sensor data collected by a first vehicle in the driving environment” and (2) “transmitting the merged knowledge to a second vehicle, wherein the second vehicle performs a vehicular operation based on the merged knowledge.” However, the additional elements are (1) insignificant pre- or post-solution activity and (2) claimed generally at an “apply it” level that is well-known in the art; therefore, they do not integrate the judicial exception into a practical application.
Here, (1) obtaining the data from sensors and (2) transmitting the results of a mental process are insignificant pre- and post-solution activity because “the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).” MPEP § 2106.05(g). (1) Gathering data and (2) outputting data from the mental activity of merging knowledge are required steps that do not meaningfully limit the judicial exception. (1) Collecting data for knowledge-based operations would be required in any implementation of the claimed abstract idea. And, (2) given its broadest reasonable interpretation, “performing a vehicular operation based on the merged knowledge” could include further merging knowledge or display of results at the second vehicle. Therefore, under the broadest reasonable interpretation, the second additional element could be (a) an additional mental process or (b) additional insignificant post-solution activity. Consequently, both additional elements fail to integrate the abstract idea into a practical application.
Further, both (1) “obtaining a plurality of contextual features of a driving environment based on knowledge related to the driving environment obtained using sensor data collected by a first vehicle in the driving environment” and (2) “transmitting the merged knowledge to a second vehicle, wherein the second vehicle performs a vehicular operation based on the merged knowledge” are claimed at an “apply it” level of generality. (1) The claimed sensor data gathering does not specify particular sensors or even vehicle-specific sensors. (2) Data transmission could be facilitated over any number of generalized technologies, including WiFi, 5G, or 802.11p (V2V communication). The additional elements are claimed generally and do not recite “improvements in the functioning of a computer or an improvement to any other technology” (MPEP § 2106.04(d)(1)), and therefore do not integrate the judicial exception into a practical application. Rather, the additional elements are claimed in such a way as to generally link the judicial exception to a particular technological environment without integration into a practical application. Similarly, “performing a vehicular operation based on the merged knowledge” is claimed at an “apply it” level without a specific positive control step. Therefore, neither of the additional elements integrates the abstract idea into a practical application.
Post-solution communication that is not "incidental to the primary process or product [or] merely a nominal or tangential addition to the claim" (MPEP § 2106.05(g)) may integrate the claim into a practical application; however, that is not the case in the instant application.
Step 2B: The claim does not recite an element or combination of elements that is unconventional or significantly more than its individual elements. “[A]n ‘inventive concept’ is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself.” (MPEP § 2106.05, citing Alice Corp., 573 U.S. at 217-18, 110 USPQ2d at 1981 (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72-73)). “Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself” (MPEP § 2106.05). The claim recites the following additional elements:
The Applicant has recited a claim wherein (1) “obtaining a plurality of contextual features of a driving environment based on knowledge related to the driving environment obtained using sensor data collected by a first vehicle in the driving environment” and (2) “transmitting the merged knowledge to a second vehicle, wherein the second vehicle performs a vehicular operation based on the merged knowledge” are claimed as additional elements in conjunction with the abstract idea. However, the limitations are recited such that the Applicant merely adds well-understood and conventional steps in the art to apply the judicial exception.
The first additional element, (1) sensor-based data gathering, is commonly applied to vehicles, is well-understood and conventional in the art, and is merely claimed at an “apply it” level of detail. (2) The second additional element does not claim, recite, or detail behavior beyond a general description of standard data transmission because it is claimed at an “apply it” level of detail that does not meet the test for “significantly more” (MPEP § 2106.05(I)(A); see MPEP § 2106.05(f)). Similarly, “performing a vehicular operation based on the merged knowledge” is claimed at an “apply it” level without a positive control step, so this limitation also fails to meet the test for “significantly more.” Accordingly, these additional elements do not amount to significantly more than the abstract idea but rather would monopolize it.
Finally, as discussed with respect to Step 2A: Prong 2, (1) data gathering from sensors and (2) transmitting the results of a mental process are insignificant pre- and post-solution activity because “the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).” MPEP § 2106.05(g). Outputting data from the mental activity of merging knowledge is a required step that does not meaningfully limit the judicial exception. Also as discussed supra, given its broadest reasonable interpretation, “performing a vehicular operation based on the merged knowledge” could include merging knowledge or displaying the results at the second vehicle. The first instance would be an additional mental process while the second would still be insignificant post-solution activity, so this limitation also fails to elevate the abstract idea to significantly more.
Regarding the further claims:
Claim 2 does not cure the deficiencies of claim 1 because claim 2 is still drawn to the “mental process” group of abstract ideas as it merely further claims the abstract idea by further limiting merging knowledge and does not include an extra-solution activity or additional elements.
Claim 3 does not cure the deficiencies of claim 1 because claim 3 is still drawn to the “mental process” group of abstract ideas as it merely further claims the abstract idea by further limiting contextual features applied and does not include an extra-solution activity or additional elements.
Claim 4 does not cure the deficiencies of claim 1 because claim 4 is still drawn to the “mental process” group of abstract ideas as it merely further claims the abstract idea by further limiting the type of data analysis and does not include an extra-solution activity or additional elements. Time series analysis is broadly claimed and could be interpreted as simply the order of events, which is within the mental capability of a human observer.
Claim 5 does not cure the deficiencies of claim 1 because claim 5 is still drawn to the “mental process” group of abstract ideas as it merely further claims the abstract idea by further limiting the vehicular knowledge network and does not include an extra-solution activity or additional elements.
Claim 6 does not cure the deficiencies of claim 1 because claim 6 is still drawn to the “mental process” group of abstract ideas as it merely further defines the abstract idea by limiting identifying a plurality of nodes and does not include an extra-solution activity or additional elements.
Claims 7 and 8 do not cure the deficiencies of the claims from which they depend because they are still drawn to the “mental process” group of abstract ideas as they merely further limit the “generating the merged knowledge by combining the first knowledge with the node knowledge” aspect of the abstract idea and do not include an extra-solution activity or additional elements.
Claim 9 does not cure the deficiencies of claim 1 because claim 9 is still drawn to the “mental process” group of abstract ideas as it merely further describes the abstract idea and does not include an extra-solution activity or additional elements.
Claim 10 does not cure the deficiencies of claim 9 because claim 10 is still drawn to the “mental process” group of abstract ideas as it merely further limits the “knowledge refinement criteria” aspect of the mental process and does not include an extra solution activity or instructions on how to apply the abstract idea.
Claims 11 and 19 are rejected for reasons paralleling claim 1. The addition of establishing a communication path is insignificant extra-solution activity and does not integrate the abstract idea into a practical application or amount to significantly more.
Claims 12-15 and 17-18 are rejected for reasons paralleling claims 2-5 and 9-10, respectively.
Claim 16 does not cure the deficiencies of claim 11 because claim 16 is still drawn to the “mental process” group of abstract ideas as it merely adds an additional insignificant post-solution data output and does not include an extra solution activity or instructions on how to apply the abstract idea.
Therefore, the claims do not amount to significantly more than the abstract idea and have been rejected under 35 USC 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and § 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-3 and 5-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim et al. (US 20200256681 A1).
Regarding claim 1, Kim discloses a method comprising:
obtaining a plurality of contextual features of a driving environment based on knowledge related to the driving environment obtained using sensor data collected by a first vehicle in the driving environment; (Kim: ¶ 392; dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle. For example, a sign, a traffic light, a vehicle involved with an accident, and the like may be set as dynamic objects. The dynamic object includes at least one of an identification number of an object, a kind of an object, a size of an object, a shape of an object, and location information (e.g., latitude, longitude, altitude) of an object.)
identifying a plurality of nodes of a vehicular knowledge network based on the plurality of contextual features, wherein each of the plurality of nodes comprises node knowledge associated with a respective subset of the plurality of contextual features; (Kim: ¶ 035; one or more nodes include a first node for a first section and a second node for a second section, and wherein the processor is configured to based on the vehicle being located in the first section, match the first path with the second path using the first node and based on the vehicle being located in the second section, match the first path with the second path using the second node.)
generating merged knowledge by combining the first knowledge with the node knowledge of at least one of the plurality of nodes; and (Kim: ¶¶ 390-392; device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon) . . . a first map having . . . a plurality of dynamic objects sensed by at least one electric component. Here, the dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle.) (Kim: ¶ 346; processor 870 may merge the acquired location information of the vehicle and the received location information of the another vehicle into the received map information)
transmitting the merged knowledge to a second vehicle, wherein the second vehicle performs a vehicular operation based on the merged knowledge. (Kim: ¶¶ 375-376; generate a merged map . . . [t]hen, the processor 870 may transmit the redefined V2X message to the another vehicle)
Regarding claim 2, as detailed above, Kim teaches the invention as detailed with respect to claim 1. Kim further teaches:
wherein generating merged knowledge by combining the first knowledge with the node knowledge of at least one of the plurality of nodes comprises: generating merged knowledge by combining the first knowledge with the node knowledge of each of the plurality of nodes. (Kim: ¶¶ 390-392; device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon) . . . a first map having . . . a plurality of dynamic objects sensed by at least one electric component. Here, the dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle.) (Kim: ¶ 346; processor 870 may merge the acquired location information of the vehicle and the received location information of the another vehicle into the received map information)
Regarding claim 3, as detailed above, Kim discloses the invention as detailed with respect to claim 1. Kim further discloses:
wherein the plurality of contextual features comprises at least one of static properties (Kim: ¶ 398; second map provides location information (x2, y2, z2) for the same traffic light, which information has to be used is a matter.) and dynamic properties obtained from the driving environment. (Kim: ¶¶ 390-392; device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon) . . . a first map having . . . a plurality of dynamic objects sensed by at least one electric component. Here, the dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle.)
Regarding claim 5, as detailed above, Kim discloses the invention as detailed with respect to claim 1. Kim further discloses:
wherein the plurality of nodes of the vehicular knowledge network collectively comprise the plurality of contextual features. (Kim: ¶ 390; map providing device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon), and provides the merged map to the electric components) (Kim: Fig. 12)
Regarding claim 6, as detailed above, Kim discloses the invention as detailed with respect to claim 1. Kim further discloses:
wherein identifying a plurality of nodes of a vehicular knowledge network comprises: identifying a path through the vehicular knowledge network that traverses the plurality of nodes. (Kim: ¶ 379; can control the vehicle using the ADAS MAP (map information, highly detailed MAP) in which the relative location between the vehicle and the another vehicle is aligned in the lane unit)
Regarding claim 7, as detailed above, Kim discloses the invention as detailed with respect to claim 6. Kim further discloses:
wherein generating the merged knowledge by combining the first knowledge with the node knowledge of each of the plurality of nodes comprises: successively combining the first knowledge with each node of the plurality of nodes while traversing the path through the vehicular knowledge network. (Kim: ¶ 035; the one or more nodes include a first node for a first section and a second node for a second section, and wherein the processor is configured to based on the vehicle being located in the first section, match the first path with the second path using the first node and based on the vehicle being located in the second section, match the first path with the second path using the second node.)
Regarding claim 8, as detailed above, Kim discloses the invention as detailed with respect to claim 1. Kim further discloses:
wherein generating the merged knowledge by combining the first knowledge with the node knowledge of each of the plurality of nodes comprises: aggregating the first knowledge with node knowledge of a first node of the plurality of nodes to generate first intermediate-merged knowledge; and aggregating the first intermediate-merged knowledge with node knowledge of a second node of the plurality of nodes to generate second intermediate-merged knowledge. (Kim: ¶ 380; may calculate a relative location (network) between the vehicle and the another vehicle using the location information of the another vehicle received from the another vehicle through V2X communication. Then, the calculated relative location information may be aligned in the lane unit on the highly detailed MAP received from the external server (eHorizon).)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20200256681 A1) as applied to claims 1 and 11, respectively, above, and further in view of Takeyasu (US 20230386340 A1).
Regarding claim 4, as detailed above, Kim teaches the invention as detailed with respect to claim 1. Kim does not explicitly teach:
wherein inferring the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features; however, Takeyasu does teach:
wherein inferring the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features. (Takeyasu: ¶ 076; estimation time determination unit 151 is a function unit to determine a time range and a time interval to generate a traffic condition map.) (Takeyasu: ¶ 009; calculate a peripheral object distribution indicating an object existence range where there is a possibility for each object included in a peripheral object group constituted of at least one object existing around a target moving object to exist in an estimation time range,) (Takeyasu: ¶ 215; traffic condition map generation unit 153 generates a traffic condition map in a target time range by merging existence probability maps corresponding to each peripheral object obtained)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Takeyasu with the teachings of Kim because doing so would result in the predictable benefit of "making it possible to consider, when a delay in driving instruction from a driving assistance device to a vehicle occurs, [a] change in a traffic condition that may occur during the delay" (Takeyasu: ¶ 008).
Regarding claim 13, as detailed above, Kim teaches the invention as detailed with respect to claim 11. Kim does not explicitly teach:
wherein deriving the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features; however, Takeyasu does teach:
wherein deriving the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features. (Takeyasu: ¶ 076; estimation time determination unit 151 is a function unit to determine a time range and a time interval to generate a traffic condition map.) (Takeyasu: ¶ 009; calculate a peripheral object distribution indicating an object existence range where there is a possibility for each object included in a peripheral object group constituted of at least one object existing around a target moving object to exist in an estimation time range,) (Takeyasu: ¶ 215; traffic condition map generation unit 153 generates a traffic condition map in a target time range by merging existence probability maps corresponding to each peripheral object obtained)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Takeyasu with the teachings of Kim because doing so would result in the predictable benefit of "making it possible to consider, when a delay in driving instruction from a driving assistance device to a vehicle occurs, [a] change in a traffic condition that may occur during the delay" (Takeyasu: ¶ 008).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20200256681 A1) as applied to claim 1 above, and further in view of Micks et al. (US 20220026919 A1).
Regarding claim 9, as detailed above, Kim teaches the invention as detailed with respect to claim 1. Kim does not explicitly teach:
further comprising: detecting a knowledge refinement criteria, wherein identifying the plurality of nodes of the vehicular knowledge network is based on detecting the knowledge refinement criteria; however, Micks does teach:
further comprising: detecting a knowledge refinement criteria, wherein identifying the plurality of nodes of the vehicular knowledge network is based on detecting the knowledge refinement criteria. (Micks: ¶ 053; vehicle further comprises one or more of a camera and a LIDAR system; the method further comprises determining that one or more of the camera and the LIDAR system are not providing usable data or are damaged; and determining the location of the vehicle based on the perception information from a radar system is performed in response to determining that the camera or LIDAR system are not providing usable data or are damaged.) (Micks: Clm. 001; merging the drive history data with the sensor data and the supplemental data to generate merged data)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Micks with the teachings of Kim because doing so would result in the predictable benefit of "useful data [being] acquired in even very adverse weather conditions" (Micks: ¶ 011).
Regarding claim 10, as detailed above, Kim in view of Micks teaches the invention as detailed with respect to claim 9. Micks further teaches:
wherein the knowledge refinement criteria comprises one of a performance degradation in the first knowledge and an amount of sensor data used to generate the first knowledge being less than a threshold amount. (Micks: ¶ 053; vehicle further comprises one or more of a camera and a LIDAR system; the method further comprises determining that one or more of the camera and the LIDAR system are not providing usable data or are damaged; and determining the location of the vehicle based on the perception information from a radar system is performed in response to determining that the camera or LIDAR system are not providing usable data or are damaged.)
(Micks: Clm. 001; merging the drive history data with the sensor data and the supplemental data to generate merged data)
Claims 11-12, 14-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 20200256681 A1) in view of Cunningham (US 20200043342 A1).
Regarding claim 11, Kim teaches a vehicle comprising:
a memory storing instructions; and one or more processors communicably coupled (Kim: ¶ 016; processor . . . vehicle) to the memory and configured to execute the instructions to: (Kim: ¶ 037; memory) derive a plurality of contextual features of a driving environment based on sensor data collected by a first vehicle in the driving environment, the plurality of contextual features are associated with first knowledge related to the driving environment; (Kim: ¶ 392; dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle. For example, a sign, a traffic light, a vehicle involved with an accident, and the like may be set as dynamic objects. The dynamic object includes at least one of an identification number of an object, a kind of an object, a size of an object, a shape of an object, and location information (e.g., latitude, longitude, altitude) of an object.) . . . receive merged knowledge based on combining the first knowledge with the knowledge of each of the plurality of nodes, (Kim: ¶ 342; module 830 may receive an ADAS MAP [Examiner note: containing previously merged and uploaded data] from the external server.) wherein vehicular operations are performed based on the merged knowledge. (Kim: ¶ 379; can control the vehicle using the ADAS MAP (map information, highly detailed MAP) in which the relative location between the vehicle and the another vehicle is aligned in the lane unit)
Kim does not explicitly teach: establish a communication path through a vehicular knowledge network based on the plurality of contextual features, wherein the path comprises a plurality of nodes, and wherein at least one of the plurality of nodes comprises node knowledge associated with at least one of the plurality of contextual features; and; however, Cunningham does teach:
establish a communication path through a vehicular knowledge network based on the plurality of contextual features, wherein the path comprises a plurality of nodes, and wherein at least one of the plurality of nodes comprises node knowledge associated with at least one of the plurality of contextual features; and (Cunningham: ¶ 056; communication circuit 194 in intermediate vehicle 115 may be used to relay V2V messages between host vehicle 105 and one or more target vehicles 110. In some embodiments, host vehicle 105 communicates with target vehicle 110 by way of intermediate vehicle 115 when the distance between host vehicle 105 and target vehicle 110 is greater than the range of their communication capabilities. For example, based on sensor information, controller 182 in an intermediate vehicle 115 may determine that the distance between host vehicle 105 and target vehicle 110 (i.e., separation distance 120A) exceeds the distance range for DSRC communication).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Cunningham with the teachings of Kim because doing so would result in the predictable benefit of allowing a message to be relayed beyond the range of the current vehicle (Cunningham: ¶ 034).
Regarding claim 12, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Kim further teaches:
wherein the plurality of contextual features comprises at least one of static properties and dynamic properties obtained from the driving environment. (Kim: ¶ 398; second map provides location information (x2, y2, z2) for the same traffic light, which information has to be used is a matter.) (Kim: ¶¶ 390-392; device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon) . . . a first map having . . . a plurality of dynamic objects sensed by at least one electric component. Here, the dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle.)
Regarding claim 14, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Kim further teaches:
wherein the plurality of nodes of the vehicular knowledge network collectively comprise the plurality of contextual features. (Kim: ¶ 390; map providing device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon), and provides the merged map to the electric components) (Kim: Fig. 12)
Regarding claim 15, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Cunningham further teaches:
wherein establish the communication path through the vehicular knowledge network comprises: identifying the plurality of nodes based on the plurality of contextual features. (Cunningham: ¶ 056; communication circuit 194 in intermediate vehicle 115 may be used to relay V2V messages between host vehicle 105 and one or more target vehicles 110. In some embodiments, host vehicle 105 communicates with target vehicle 110 by way of intermediate vehicle 115 when the distance between host vehicle 105 and target vehicle 110 is greater than the range of their communication capabilities. For example, based on sensor information, controller 182 in an intermediate vehicle 115 may determine that the distance between host vehicle 105 and target vehicle 110 (i.e., separation distance 120A) exceeds the distance range for DSRC communication) (Cunningham: ¶ 009; determining a communication blockage between the first vehicle and the second vehicle; and detecting a signal from the first vehicle, wherein the signal is determined to be unstable by the third vehicle.)
Regarding claim 16, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Kim further teaches:
wherein the one or more processors are further configured to execute the instructions to: transmit the first knowledge to a first node of the plurality of nodes based on at least one of the plurality of contextual features. (Kim: ¶ 369; another vehicle to which the V2X message is transmitted may be another vehicle existing within a predetermined distance from the vehicle 100. The predetermined distance may be determined by an available distance of the V2X module or the setting of a user. When a number of other vehicles to which the V2X message is transmitted is plural, the processor 870 may acquire location information of the another vehicle)
Regarding claim 17, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Cunningham further teaches:
wherein the one or more processors are further configured to execute the instructions to: detect a knowledge refinement criteria, wherein establishing the communication path through the vehicular knowledge network is based on detecting the knowledge refinement criteria. (Cunningham: ¶ 056; communication circuit 194 in intermediate vehicle 115 may be used to relay V2V messages between host vehicle 105 and one or more target vehicles 110. In some embodiments, host vehicle 105 communicates with target vehicle 110 by way of intermediate vehicle 115 when the distance between host vehicle 105 and target vehicle 110 is greater than the range of their communication capabilities. For example, based on sensor information, controller 182 in an intermediate vehicle 115 may determine that the distance between host vehicle 105 and target vehicle 110 (i.e., separation distance 120A) exceeds the distance range for DSRC communication) (Cunningham: ¶ 009; determining a communication blockage between the first vehicle and the second vehicle; and detecting a signal from the first vehicle, wherein the signal is determined to be unstable by the third vehicle.)
Regarding claim 19, Kim teaches a system comprising:
a memory storing instructions; and one or more processors communicably (Kim: ¶ 016; processor . . . vehicle) coupled to the memory and configured to execute the instructions to: (Kim: ¶ 037; memory) receive, from a vehicle, a plurality of contextual features associated with knowledge related to a driving environment based on sensor data of the vehicle collected from the driving environment; (Kim: ¶ 392; dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle. For example, a sign, a traffic light, a vehicle involved with an accident, and the like may be set as dynamic objects. The dynamic object includes at least one of an identification number of an object, a kind of an object, a size of an object, a shape of an object, and location information (e.g., latitude, longitude, altitude) of an object.)
Kim does not explicitly teach: execute cycle detection to identify a communication path through a plurality of nodes of vehicular knowledge network, wherein each of the plurality of nodes comprises at least a contextual feature of the plurality of contextual features, and wherein the plurality of nodes collectively comprise the plurality of contextual features; and; however, Cunningham does teach:
execute cycle detection to identify a communication path through a plurality of nodes of vehicular knowledge network, wherein each of the plurality of nodes comprises at least a contextual feature of the plurality of contextual features, and wherein the plurality of nodes collectively comprise the plurality of contextual features; and (Cunningham: ¶ 056; communication circuit 194 in intermediate vehicle 115 may be used to relay V2V messages between host vehicle 105 and one or more target vehicles 110. In some embodiments, host vehicle 105 communicates with target vehicle 110 by way of intermediate vehicle 115 when the distance between host vehicle 105 and target vehicle 110 is greater than the range of their communication capabilities. For example, based on sensor information, controller 182 in an intermediate vehicle 115 may determine that the distance between host vehicle 105 and target vehicle 110 (i.e., separation distance 120A) exceeds the distance range for DSRC communication) (Cunningham: ¶ 009; determining a communication blockage between the first vehicle and the second vehicle; and detecting a signal from the first vehicle, wherein the signal is determined to be unstable by the third vehicle.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Cunningham with the teachings of Kim because doing so would result in the predictable benefit of allowing a message to be relayed beyond the range of the current vehicle (Cunningham: ¶ 034).
Regarding claim 20, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 19. Kim further teaches:
wherein the one or more processors are further configured to execute the instructions to: execute knowledge creation based on raw data obtained from one or more connected vehicles, the knowledge creation comprising generating node knowledge and deriving a context associated with the knowledge. (Kim: ¶¶ 390-392; device 800 merges a plurality of maps received from a plurality of servers into one map (or eHorizon) . . . a first map having . . . a plurality of dynamic objects sensed by at least one electric component. Here, the dynamic object refers to an object which is sensed by electric components such as a camera, a LiDAR, a radar, etc., disposed in the vehicle.) (Kim: ¶ 346; processor 870 may merge the acquired location information of the vehicle and the received location information of the another vehicle into the received map information)
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Cunningham as applied to claim 11 above, and further in view of Takeyasu (US 20230386340 A1).
Regarding claim 13, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 11. Kim does not explicitly teach:
wherein deriving the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features; however, Takeyasu does teach:
wherein deriving the plurality of contextual features comprises: executing time series analysis on the sensor data to derive the plurality of contextual features. (Takeyasu: ¶ 076; estimation time determination unit 151 is a function unit to determine a time range and a time interval to generate a traffic condition map.) (Takeyasu: ¶ 008). (Takeyasu: ¶ 009; calculate a peripheral object distribution indicating an object existence range where there is a possibility for each object included in a peripheral object group constituted of at least one object existing around a target moving object to exist in an estimation time range,) (Takeyasu: ¶ 215; traffic condition map generation unit 153 generates a traffic condition map in a target time range by merging existence probability maps corresponding to each peripheral object obtained)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Takeyasu with the teachings of Kim because doing so would result in the predictable benefit of "making it possible to consider, when a delay in driving instruction from a driving assistance device to a vehicle occurs, [a] change in a traffic condition that may occur during the delay" (Takeyasu: ¶ 008).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Cunningham as applied to claim 17 above, and further in view of Micks et al. (US 20220026919 A1).
Regarding claim 18, as detailed above, Kim in view of Cunningham teaches the invention as detailed with respect to claim 17. Kim does not explicitly teach:
wherein the knowledge refinement criteria comprises one of a performance degradation in the first knowledge and an amount of sensor data used to generate the first knowledge being less than a threshold amount; however, Micks does teach:
wherein the knowledge refinement criteria comprises one of a performance degradation in the first knowledge and an amount of sensor data used to generate the first knowledge being less than a threshold amount. (Micks: ¶ 053; vehicle further comprises one or more of a camera and a LIDAR system; the method further comprises determining that one or more of the camera and the LIDAR system are not providing usable data or are damaged; and determining the location of the vehicle based on the perception information from a radar system is performed in response to determining that the camera or LIDAR system are not providing usable data or are damaged.) (Micks: Clm. 001; merging the drive history data with the sensor data and the supplemental data to generate merged data)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Micks with the teachings of Kim because doing so would result in the predictable benefit of "useful data [being] acquired in even very adverse weather conditions." (Micks: ¶ 011).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Huang et al. (US 20240062533 A1), which discloses a visual enhancement method and a system based on fusion of spatially aligned features of multiple networked vehicles.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES PALL whose telephone number is (571)272-5280. The examiner can normally be reached on M-F 9:30 - 18:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached on 571-272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.P./ Examiner, Art Unit 3663
/JONATHAN M DAGER/Primary Examiner, Art Unit 3663