Detailed Office Action
Status of Claims
This Office Action is in response to the Applicant’s amendments and remarks filed 11/21/2025. The applicant has amended claims 1, 2, 5-12, 14-16, and 19-20. Claim 9 has been cancelled. Claims 1-8 and 10-20 are presently pending and are presented for examination.
Response to Amendment
The amendment filed 11/21/2025 has been entered. Claims 1-8 and 10-20 remain pending in the application.
Reply to Applicant’s Remarks
Applicant’s remarks filed 11/21/2025 have been fully considered and are addressed as follows:
Claim Interpretation Under 35 U.S.C. 112(f):
Applicant’s amendments to the claims filed 11/21/2025 have caused Claims 1, 15, and 20 to no longer invoke 35 U.S.C. 112(f). Therefore, the previous interpretation under 112(f) has been withdrawn.
Claim Rejections Under 35 U.S.C. 112(a):
Applicant’s amendments to the claims filed 11/21/2025 have overcome the 35 U.S.C. 112(a) rejections previously set forth, as Claims 1, 15, and 20 no longer invoke 112(f) and therefore satisfy the 112(a) written description requirement. The rejection has been withdrawn.
Claim Rejections Under 35 U.S.C. 112(b):
Applicant’s amendments to the claims filed 11/21/2025 have overcome the 35 U.S.C. 112(b) rejections previously set forth. The rejection has been withdrawn.
Claim Rejections Under 35 U.S.C. 112(d):
Applicant’s amendments to the claims filed 11/21/2025 have overcome the 35 U.S.C. 112(d) rejections previously set forth. The rejection has been withdrawn.
Claim Rejections Under 35 U.S.C. 101:
Applicant’s amendments to the claims filed 11/21/2025 have not overcome the 35 U.S.C. 101 rejections previously set forth. Regarding the Applicant’s argument that “Claim 1 recites features that cannot be practically performed in the human mind”, the Examiner respectfully disagrees.
The examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
Further, the examiner submits that “Claims can recite a mental process even if they are claimed as being performed on a computer” (See at least MPEP 2106.04(a)(2)(III)(C)).
The acts of generating state information about where a device is located, abstracting the state space for the environment, and training neural networks in parallel can, when provided the proper data and information, be performed by the human mind, either alone or with the aid of pen and paper.
Further, because the claims recite only mental processes and insignificant extra-solution activities, there are no additional elements that can integrate the abstract idea into a practical application.
See below for detailed rejection.
Claim Rejections Under 35 U.S.C. 103:
Applicant’s arguments, see Arguments/Remarks, filed 11/21/2025, with regard to the rejections of Claims 1, 15, and 20 under 35 U.S.C. 102 have been fully considered. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art reference(s).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims’ subject matter eligibility will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).
101 Analysis - With respect to Claim 1
Claims 1, 15, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis - Step 1:
Claim 1 is directed to an apparatus, which falls within the statutory category of a machine. Claim 15 is directed to a method, which falls within the statutory category of a process. Claim 20 is directed to a non-transitory computer-readable medium, which falls within the statutory category of a manufacture. Therefore, Claims 1, 15, and 20 fall within at least one of the four statutory categories.
101 Analysis - Step 2A Prong One:
Regarding Prong One of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental process.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites, inter alia:
“An apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to:
generate, by a device, state information about a part of an environment where the device is positioned
receive, by the device from at least one other device, messages comprising state information about a part of the environment where the at least one other device is positioned, wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages
abstract a state space for the environment with an abstractor neural network based on the generated state information and the received state information messages to provide an abstracted state space
train the abstractor neural network and the reinforcement learning neural network in parallel
generate information for the device by a reinforced learning module for navigation in the environment based on the abstracted state space and further state information generated by the device and received from the at least one other device”
The examiner submits that the foregoing emphasized limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind.
For example, “generating”, “abstracting”, and “training”, in the context of this claim, all encompass a person looking at available data and forming a simple judgment (determination, analysis, comparison, etc.) either mentally or using pen and paper. Accordingly, the claim recites at least one abstract idea. The examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As drafted, the above claims, under their broadest reasonable interpretation, cover mental processes performed in the human mind (including an observation, evaluation, judgment, or opinion) that are merely completed via generic computer components. Accordingly, the claims recite an abstract idea.
Step 2A Prong Two Analysis:
Regarding Prong Two of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
Claim 1 recites, inter alia:
“An apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to:
generate, by a device, state information about a part of an environment where the device is positioned
receive, by the device from at least one other device, messages comprising state information about a part of the environment where the at least one other device is positioned, wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages
abstract a state space for the environment with an abstractor neural network based on the generated state information and the received state information messages to provide an abstracted state space
train the abstractor neural network and the reinforcement learning neural network in parallel
generate information for the device by a reinforced learning module for navigation in the environment based on the abstracted state space and further state information generated by the device and received from the at least one other device”
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitation of “at least one processor and at least one memory storing instructions that when executed by the at least one processor, cause the apparatus at least to …”, this limitation merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose vehicle control environment. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps.
Regarding the additional limitation of “receive, by the device from at least one other device, messages comprising state information…”, this limitation merely describes the sending and receiving of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g).
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B Analysis:
The claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using generic computer components to perform the abstract idea amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Regarding dependent claims 2-8, 10-14, and 16-19, no claim adds a further limitation that introduces any practical application to the claimed invention; the dependent claims merely add further mental processes, mathematical concepts, and post-solution activities and are thus not patent eligible.
Therefore, Claims 1-8 and 10-20 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 10-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 20190220003 A1) in view of Patil et al. (US 20200366563 A1) and Kolouri et al. (US 20190294149 A1), hereafter referred to as Sharma, Patil, and Kolouri, respectively.
Regarding Claim 1, Sharma teaches an apparatus comprising: at least one processor and at least one memory storing instructions (see at least Sharma [¶ 28] The term “module” may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality)
when executed by the at least one processor, cause the apparatus at least to:
generate, by a device, state information about a part of an environment where the device is positioned (see at least Sharma [¶ 4, 19] FIG. 1 illustrates an overview of an environment for CA/AD vehicles assisted by collaborative three dimensional (3-D) mapping technology of the present disclosure, in accordance with various embodiments....During operation, a CA/AD vehicle uses a number of cameras and other sensors to sense the surrounding environment. This information is sent to the computational systems within the CA/AD vehicle for processing and for navigation use).
receive, by the device from at least one other device, messages comprising state information about a part of the environment where the at least one other device is positioned (see at least Sharma [¶ 14, 20, 46] a CA/AD… to manage a collaborative three-dimensional (3-D) map of an environment around the first CA/AD vehicle, wherein the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication of the at least the portion of the 3-D map proximate to the first CA/AD vehicle and the other CA/AD vehicle into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller...the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space)
abstract a state space for the environment with an abstractor neural network based on the generated state information and the received state information messages to provide an abstracted state space (see at least Sharma [¶ 17, 46, 52] various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle. Prior to incorporating these various objects into a collaborative 3-D map, a localization technique may be applied to convert the coordinate system of the location of the various classified objects within the particular CA/AD vehicle map to the coordinate system of the collaborative 3-D map….the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier...the location of identified and classified objects may be provided in a compact representation of the 3-D space, which may be used to represent all or portions of a 3-D map local to a vehicle or a collaborative 3-D map). Mapping objects and the surrounding environment via a deep neural network to be represented by a coordinate system and a volumetric mapping representation is a type of abstraction of state information. Such a neural network is therefore analogous to an abstractor neural network.
generate information for the device by a reinforced learning neural network for navigation in the environment based on the abstracted state space and further state information generated by the device and received from the at least one other device (see at least Sharma [¶ 14, 16-17, 33, 46, 88] the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication…into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller…various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle…the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space…CA/AD vehicle 102 is configured with a collaborative 3-D map system controller 120 incorporated with the collaborative 3-D mapping technology of the present disclosure to provide CA/AD vehicles 102 with a more accurate collaborative 3-D map to guide/assist CA/AD vehicle 102 in navigating through the environment on roadway 108 to its destination).
However, Sharma does not explicitly teach wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages.
Patil, in the same field as the endeavor, teaches wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages (see at least Patil [Abstract and ¶ 36, 89, 104-107] A communication management component (CMC) can receive data and metadata from a device, analyze the data and metadata, and, based on the analyzing and data management criteria, determine whether any, all, or a portion of the data is to be communicated to a second device associated with the core network or associated communication network. CMC can be trained, using machine learning, to learn to identify device types, communication protocols, and data payload formats of devices. Based on the analyzing and the training, CMC can determine the device type, communication protocol, and data payload format associated with the device...The MEC component can be particularly relevant in the context of 5G networks in order to provide the low-latency capabilities that can be desired (e.g., wanted or required) for a number of different use cases and solutions (e.g., medical or emergency-related uses and solutions, uses and solutions relating to autonomous vehicles)…A communication device (e.g., 104, 106, . . . )…can refer to any type of wireless device that can communicate with a radio network node in a cellular or mobile communication system. 
Examples of communication devices can include...a device associated or integrated with a vehicle (e.g., automobile, airplane, bus, train, or ship)… the CMC 604 can comprise or be associated with a machine learning and/or AI engine 620 that can employ one or more desired machine learning and/or AI techniques that can enable the machine learning and/or AI engine 620, and thus, the CMC 604 to learn respective characteristics of or associated with various types of devices (e.g., 608, 610, 612)). The disclosure teaches a device that may be a vehicle that utilizes reinforcement machine learning to identify communication protocols; therefore, the disclosure teaches learning the meaning of the messages, comprising learning a communication protocol for receiving the messages.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma to incorporate a system wherein the meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the communication between vehicles, as discussed in Patil (see at least Patil [¶ 89] although it is to be appreciated and understood that MEC components also can be useful (e.g., useful to reduce latency or otherwise improve performance in data processing and communications)).
Further, while the combination of Sharma and Patil teaches an abstractor neural network and a reinforcement learning neural network, the combination does not explicitly teach instructions that, when executed by the at least one processor, cause the apparatus at least to: train the abstractor neural network and the reinforcement learning neural network in parallel.
Kolouri, in the same field as the endeavor, teaches instructions that, when executed by the at least one processor, cause the apparatus at least to: train two neural networks in parallel (see at least Kolouri [Abstract, ¶ 54-55, 65-67] Described is a system for controlling autonomous platform. Based on an input image, the system generates a motor control command decision for the autonomous platform....The system according to embodiments of the present disclosure consists of two general modules, namely the decision module (i.e., the learner) and the uncertainty module....During the training phase, both modules are trained in parallel, receiving the same input data…The decision module 400 consists of a deep neural network that is trained for decision making during the training phase in a supervised manner...the decision module 400 according to embodiments of the present disclosure is accompanied with an uncertainty module 402…The uncertainty module 402 receives as input the same training data as the decision module 400. The goal of the uncertainty module 402 is, however, to learn the distribution of the input data. To learn such a distribution, the combination of a deep adversarial convolutional auto-encoder 404 is used together with a unique Sliced Wasserstein Clustering technique (see Literature Reference No. 7). The auto-encoder 404 is an artificial neural network having multiple layers, typically an input layer, a code layer, and an output layer (represented by various sized rectangular shapes)). The disclosure in Kolouri teaches two modules trained in parallel, the decision module and the uncertainty module; it is further explained that these two modules may be neural networks. Therefore, Kolouri teaches training two neural networks in parallel.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma and Patil to incorporate training the abstractor neural network and the reinforcement learning neural network in parallel, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the functioning of the system by saving time and resources by training two modules at the same time. Further, as discussed in Kolouri, such a method may also be beneficial in requiring only one set of input data when training two neural networks in parallel.
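As an illustrative aside, the parallel-training arrangement quoted from Kolouri above ("both modules are trained in parallel, receiving the same input data") can be sketched as two small models updated in the same training loop on a shared input batch. The models, loss functions, and data below are hypothetical placeholders for the sake of illustration, not Kolouri's actual decision and uncertainty modules.

```python
# Hypothetical sketch: two modules trained in parallel on the same input.
# The "decision" model learns a supervised target; the "auto-encoder" stand-in
# learns to reconstruct its input, loosely mirroring the two-module scheme.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                 # shared input data for both modules
y = X @ np.array([1.0, -2.0, 0.5])           # supervised targets for the decision model

w_decision = np.zeros(3)                     # decision module: linear regressor
W_auto = rng.normal(scale=0.1, size=(3, 3))  # reconstruction module: x -> W x

lr = 0.1
for _ in range(300):
    # Both modules receive the same batch and are updated in the same step.
    grad_d = X.T @ (X @ w_decision - y) / len(X)   # MSE gradient, decision module
    grad_a = X.T @ (X @ W_auto - X) / len(X)       # reconstruction gradient
    w_decision -= lr * grad_d
    W_auto -= lr * grad_a
```

In this sketch, a single pass over the data updates both modules, which is the time- and resource-saving property relied on in the motivation statement above.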
Regarding Claim 2, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the state information further comprises local observations by the device, position of the device and messages from a previous time slot (see at least Sharma [¶ 16, 56] It is important that a CA/AD vehicle has a comprehensive view of its proximate environment to be able to navigate the environment in a safe and efficient manner. In embodiments described herein, information from other CA/AD vehicles may be shared with the CA/AD vehicle to provide a comprehensive collaborative 3-D map that includes objects in the surrounding environment….there may be various ways to build a collaborative 3-D map of the environment, using the collaborative 3-D Map comparator 252. In embodiments, this may involve different vehicles 202a-202n periodically broadcasting the serialized representation of their octree along with a timestamp and its frame of reference, for example a common reference such as from a high definition (HD) map).
Regarding Claim 3, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the state information further comprises sensory information (see at least Sharma [¶ 16] It is important that a CA/AD vehicle has a comprehensive view of its proximate environment to be able to navigate the environment in a safe and efficient manner. In embodiments described herein, information from other CA/AD vehicles may be shared with the CA/AD vehicle to provide a comprehensive collaborative 3-D map that includes objects in the surrounding environment. This information may include, for example, portions of one of the other CA/AD vehicle's 3-D map, data from the other CA/AD vehicle's sensors, and positioning data of the other CA/AD vehicle's sensors, that are leveraged to build a collaborative 3-D map of the environment that will include objects blocked from the CA/AD vehicle's view).
Regarding Claim 4, Sharma in view of Patil and Kolouri teaches all limitations of Claim 3 as set forth above. Sharma further teaches wherein the state information further comprises sensory information generated by an imaging device (see at least Sharma [¶ 32] FIG. 1 illustrates an overview of an environment for CA/AD vehicle assisted by collaborative three-dimensional (3-D) mapping technology of the present disclosure…Each of the CA/AD vehicles 102, 104, 106, in general, includes an in-vehicle system 114, sensors 122 and driving control units 124, of various types…the sensors 122 may include camera-based sensors (not shown) and/or LIDAR-based sensors (not shown) that may be positioned around the perimeter of the CA/AD vehicle 102, 104, 106 to capture imagery proximate to the vehicle to identify and categorize relevant objects in the environment).
Regarding Claim 5, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the instructions when executed by the at least one processor further cause the apparatus to: feed the emerging communication messages into the abstractor neural network and take the emerging communication messages into account in the abstracting (see at least Sharma [¶ 20, 46, 79] Embodiments may be directed to a consensus-based object observer and classifier system to allow CA/AD vehicles generate an accurate 3-D map of their environment by leveraging information from other CA/AD vehicles. The diversity in object classifiers makes the system able to detect objects in the environment that may not be visible to a CA/AD vehicle due to occlusions of the objects with respect to their sensors….the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space. Common key points may be found for the same objects as seen by different vehicles. A fundamental matrix F may be computed that is decomposed into a relative rotation R and translation t… the collaborative 3-D map system controller 420 may include one or more trained neural networks in performing its determinations and/or assessments).
Regarding Claim 10, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the instructions when executed by the at least one processor further cause the apparatus to: describe local observations by a tree structure (see at least Sharma [¶ 52] Such a compact representation of a 3-D space may be referred to as a volumetric mapping representation. One approach to a volumetric mapping representation is to use an octree. An octree partitions a 3-D space into 8 octants. This partitioning may be done recursively to show increasing detail and may be used to create a 3-D model of environments that is easily updateable, flexible and compact. In embodiments, a 3-D view that each vehicle creates based on sensor 122 data or other techniques may be represented using an octree).
Regarding Claim 11, Sharma in view of Patil and Kolouri teaches all limitations of Claim 10 as set forth above. Sharma further teaches wherein the tree structure comprises a quadtree for a two-dimensional environment or an octree for a three-dimensional environment (see at least Sharma [¶ 52] Such a compact representation of a 3-D space may be referred to as a volumetric mapping representation. One approach to a volumetric mapping representation is to use an octree. An octree partitions a 3-D space into 8 octants. This partitioning may be done recursively to show increasing detail and may be used to create a 3-D model of environments that is easily updateable, flexible and compact. In embodiments, a 3-D view that each vehicle creates based on sensor 122 data or other techniques may be represented using an octree).
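As an illustrative sketch of the octree representation quoted from Sharma above (an octree recursively partitions a 3-D space into 8 octants, giving a compact, easily updateable volumetric map), the following code builds a minimal occupancy octree. All class names and parameters here are illustrative assumptions, not taken from the Sharma reference.

```python
# Minimal occupancy octree sketch: each node covers a cube of 3-D space and,
# when subdivided, splits that cube into 8 child octants. Points inserted into
# the tree mark the leaf cube that contains them as occupied.

class OctreeNode:
    def __init__(self, center, half_size, depth, max_depth=4):
        self.center = center        # (x, y, z) midpoint of this cube
        self.half_size = half_size  # half the cube's edge length
        self.depth = depth
        self.max_depth = max_depth
        self.occupied = False       # leaf occupancy flag
        self.children = None        # list of 8 children once subdivided

    def _octant(self, p):
        # Octant index 0-7 from the sign of each coordinate vs. the center.
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _child_center(self, i):
        h = self.half_size / 2
        cx, cy, cz = self.center
        return (cx + (h if i & 1 else -h),
                cy + (h if i & 2 else -h),
                cz + (h if i & 4 else -h))

    def insert(self, p):
        # Subdivide recursively until max_depth, then mark the leaf occupied.
        if self.depth == self.max_depth:
            self.occupied = True
            return
        if self.children is None:
            self.children = [
                OctreeNode(self._child_center(i), self.half_size / 2,
                           self.depth + 1, self.max_depth)
                for i in range(8)
            ]
        self.children[self._octant(p)].insert(p)

    def query(self, p):
        # True if the leaf cube containing p has been marked occupied.
        if self.children is None:
            return self.occupied
        return self.children[self._octant(p)].query(p)

# Usage: a root cube spanning [-8, 8] in each axis.
root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=8.0, depth=0)
root.insert((1.0, 2.0, 3.0))  # marks the leaf cube containing this point
```

The recursive subdivision shows why the representation is compact: empty regions remain single unsubdivided nodes, while detail accumulates only where observations are inserted. A quadtree for a two-dimensional environment would follow the same pattern with 4 quadrants instead of 8 octants.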
Regarding Claim 13, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the abstracting of the state space comprises local embedding of the state information (see at least Sharma [¶ 17] various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle. Prior to incorporating these various objects into a collaborative 3-D map, a localization technique may be applied to convert the coordinate system of the location of the various classified objects within the particular CA/AD vehicle map to the coordinate system of the collaborative 3-D map).
Regarding Claim 14, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. Sharma further teaches wherein the abstracting of the state space comprises receiving a graph structure representing a local observation of the entire state space, and modelling the graph structure based on feature vectors for the device to learn a representation vector of the entire graph structure (see at least Sharma [¶ 41, 52, 79] a first CA/AD vehicle 202a may generate sparse 3-D features of the environment, referred to as Oriented FAST and Rotated BRIEF (ORB) features, where each feature is associated with a precise position relative to a well-defined coordinate frame of reference. Another CA/AD vehicle 202b will find the matching features in its field of view...Returning to the data 223a-223n, in embodiments, the location of identified and classified objects may be provided in a compact representation of the 3-D space, which may be used to represent all or portions of a 3-D map local to a vehicle or a collaborative 3-D map… the input variables (x.sub.i) 602 of the neural network are set as a vector containing the relevant variable data, while the output determination or assessment (y.sub.i) 604 of the neural network are also as a vector).
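For illustration only (a generic sketch, not taken from Sharma or the claims), the idea in Claim 14 of modelling a graph of per-node feature vectors to obtain a single representation vector for the whole graph can be shown with a minimal aggregate-and-pool readout; all names are hypothetical:

```python
import numpy as np

def graph_representation(A, X):
    """Average each node's neighbor features (one message-passing round),
    then mean-pool all node embeddings into one vector for the graph."""
    deg = A.sum(axis=1, keepdims=True)
    H = (A @ X) / np.maximum(deg, 1)      # average neighbor features
    H = np.concatenate([X, H], axis=1)    # keep each node's own features too
    return H.mean(axis=0)                 # single representation vector

# A triangle graph (3 fully connected nodes) with 2-d node features.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
g = graph_representation(A, X)
```

In a learned system the averaging would be replaced by trainable weight matrices, but the structure — neighbor aggregation followed by pooling to one vector — is the same.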
Regarding Claim 15, Sharma teaches a method comprising:
generating, by a device, state information about a part of environment where the device is positioned (see at least Sharma [¶4, 19] FIG. 1 illustrates an overview of an environment for CA/AD vehicles assisted by collaborative three dimensional (3-D) mapping technology of the present disclosure, in accordance with various embodiments....During operation, a CA/AD vehicle uses a number of cameras and other sensors to sense the surrounding environment. This information is sent to the computational systems within the CA/AD vehicle for processing and for navigation use)
receiving, by the device from at least one other device, messages comprising state information about a part of the environment where the at least one other device is positioned (see at least Sharma [¶ 14, 20, 46] a CA/AD… to manage a collaborative three-dimensional (3-D) map of an environment around the first CA/AD vehicle, wherein the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication of the at least the portion of the 3-D map proximate to the first CA/AD vehicle and the other CA/AD vehicle into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller...the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space)
abstracting the state space for the environment with an abstractor neural network based on the generated state information and the received state information messages to provide an abstracted state space (see at least Sharma [¶ 17, 46, 52] various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle. Prior to incorporating these various objects into a collaborative 3-D map, a localization technique may be applied to convert the coordinate system of the location of the various classified objects within the particular CA/AD vehicle map to the coordinate system of the collaborative 3-D map….the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier...the location of identified and classified objects may be provided in a compact representation of the 3-D space, which may be used to represent all or portions of a 3-D map local to a vehicle or a collaborative 3-D map). Mapping objects and the surrounding environment via a deep neural network to be represented by a coordinate system and a volumetric mapping representation is a type of abstraction of state information. Such a neural network is therefore analogous to an abstractor neural network.
generating navigation information for the device with the reinforced learning neural network for navigation in the environment based on the abstracted state space and further state information generated by the device and received from the at least one other device (see at least Sharma [¶ 14, 16-17, 33, 46, 88] the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication…into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller…various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle…the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space…CA/AD vehicle 102 is configured with a collaborative 3-D map system controller 120 incorporated with the collaborative 3-D mapping technology of the present disclosure to provide CA/AD vehicles 102 with a more accurate collaborative 3-D map to guide/assist CA/AD vehicle 102 in navigating through the environment on roadway 108 to its destination).
However, Sharma does not explicitly teach wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages.
Patil, in the same field of endeavor, teaches wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages (see at least Patil [Abstract and ¶ 36, 89, 104-107] A communication management component (CMC) can receive data and metadata from a device, analyze the data and metadata, and, based on the analyzing and data management criteria, determine whether any, all, or a portion of the data is to be communicated to a second device associated with the core network or associated communication network. CMC can be trained, using machine learning, to learn to identify device types, communication protocols, and data payload formats of devices. Based on the analyzing and the training, CMC can determine the device type, communication protocol, and data payload format associated with the device...The MEC component can be particularly relevant in the context of 5G networks in order to provide the low-latency capabilities that can be desired (e.g., wanted or required) for a number of different use cases and solutions (e.g., medical or emergency-related uses and solutions, uses and solutions relating to autonomous vehicles)…A communication device (e.g., 104, 106, . . . )…can refer to any type of wireless device that can communicate with a radio network node in a cellular or mobile communication system. 
Examples of communication devices can include...a device associated or integrated with a vehicle (e.g., automobile, airplane, bus, train, or ship)… the CMC 604 can comprise or be associated with a machine learning and/or AI engine 620 that can employ one or more desired machine learning and/or AI techniques that can enable the machine learning and/or AI engine 620, and thus, the CMC 604 to learn respective characteristics of or associated with various types of devices (e.g., 608, 610, 612)). The disclosure teaches a device that may be a vehicle that utilizes reinforcement learning to identify communication protocols; therefore, the disclosure teaches that learning the meaning of the messages comprises learning a communication protocol for receiving the messages.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma to contain a system wherein the meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the communication between vehicles as discussed in Patil (see at least Patil [¶ 89] although it is to be appreciated and understood that MEC components also can be useful (e.g., useful to reduce latency or otherwise improve performance in data processing and communications)).
Further, while the combination of Sharma and Patil teaches an abstractor neural network and a reinforcement learning neural network, it does not explicitly teach training the abstractor neural network and the reinforcement learning neural network in parallel.
Kolouri, in the same field of endeavor, teaches training two neural networks in parallel (see at least Kolouri [Abstract, ¶ 54-55, 65-67] Described is a system for controlling autonomous platform. Based on an input image, the system generates a motor control command decision for the autonomous platform....The system according to embodiments of the present disclosure consists of two general modules, namely the decision module (i.e., the learner) and the uncertainty module....During the training phase, both modules are trained in parallel, receiving the same input data…The decision module 400 consists of a deep neural network that is trained for decision making during the training phase in a supervised manner...the decision module 400 according to embodiments of the present disclosure is accompanied with an uncertainty module 402…The uncertainty module 402 receives as input the same training data as the decision module 400. The goal of the uncertainty module 402 is, however, to learn the distribution of the input data. To learn such a distribution, the combination of a deep adversarial convolutional auto-encoder 404 is used together with a unique Sliced Wasserstein Clustering technique (see Literature Reference No. 7). The auto-encoder 404 is an artificial neural network having multiple layers, typically an input layer, a code layer, and an output layer (represented by various sized rectangular shapes)). The disclosure in Kolouri teaches two modules trained in parallel, the decision module and the uncertainty module; it is further explained that these two modules may be implemented as neural networks. Therefore, Kolouri teaches training two neural networks in parallel.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma and Patil to contain a system for training the abstractor neural network and the reinforcement learning neural network in parallel, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the functioning of the system, saving time and resources by training the two modules at the same time. Further, as discussed in Kolouri, such a method may also be beneficial in needing only one set of input data when training two neural networks in parallel.
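For illustration only (a simplified sketch, not code from Kolouri; toy linear models stand in for the deep networks, and all names are hypothetical), the parallel-training arrangement credited to Kolouri — two models updated in the same pass over the same input data — looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))               # shared input data
y = X @ np.array([1.0, -2.0, 0.5])         # target for the decision model

W_dec = np.zeros(3)        # "decision module": predicts y from X
W_rec = np.zeros((3, 3))   # "uncertainty module": reconstructs X itself
lr = 0.05

for _ in range(500):
    # Both models are updated in the same pass over the same batch,
    # mirroring the parallel training of Kolouri's two modules.
    err_dec = X @ W_dec - y
    W_dec -= lr * (X.T @ err_dec) / len(X)
    err_rec = X @ W_rec - X
    W_rec -= lr * (X.T @ err_rec) / len(X)
```

A single pass over the data serves both objectives, which is the time-and-resource saving referred to in the motivation statement above.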
Regarding Claim 16, Sharma in view of Patil and Kolouri teaches all limitations of Claim 15 as set forth above. Sharma further teaches wherein the state information further comprises local observations by the device, position of the device and messages from a previous time slot (see at least Sharma [¶ 16, 56] It is important that a CA/AD vehicle has a comprehensive view of its proximate environment to be able to navigate the environment in a safe and efficient manner. In embodiments described herein, information from other CA/AD vehicles may be shared with the CA/AD vehicle to provide a comprehensive collaborative 3-D map that includes objects in the surrounding environment….there may be various ways to build a collaborative 3-D map of the environment, using the collaborative 3-D Map comparator 252. In embodiments, this may involve different vehicles 202a-202n periodically broadcasting the serialized representation of their octree along with a timestamp and its frame of reference, for example a common reference such as from a high definition (HD) map).
Regarding Claim 17, Sharma in view of Patil and Kolouri teaches all limitations of Claim 15 as set forth above. Sharma further teaches wherein the state information further comprises sensory information (see at least Sharma [¶ 16] It is important that a CA/AD vehicle has a comprehensive view of its proximate environment to be able to navigate the environment in a safe and efficient manner. In embodiments described herein, information from other CA/AD vehicles may be shared with the CA/AD vehicle to provide a comprehensive collaborative 3-D map that includes objects in the surrounding environment. This information may include, for example, portions of one of the other CA/AD vehicle's 3-D map, data from the other CA/AD vehicle's sensors, and positioning data of the other CA/AD vehicle's sensors, that are leveraged to build a collaborative 3-D map of the environment that will include objects blocked from the CA/AD vehicle's view).
Regarding Claim 18, Sharma in view of Patil and Kolouri teaches all limitations of Claim 17 as set forth above. Sharma further teaches wherein the state information further comprises sensory information generated by an imaging device (see at least Sharma [¶ 32] FIG. 1 illustrates an overview of an environment for CA/AD vehicle assisted by collaborative three-dimensional (3-D) mapping technology of the present disclosure…Each of the CA/AD vehicles 102, 104, 106, in general, includes an in-vehicle system 114, sensors 122 and driving control units 124, of various types…the sensors 122 may include camera-based sensors (not shown) and/or LIDAR-based sensors (not shown) that may be positioned around the perimeter of the CA/AD vehicle 102, 104, 106 to capture imagery proximate to the vehicle to identify and categorize relevant objects in the environment).
Regarding Claim 19, Sharma in view of Patil and Kolouri teaches all limitations of Claim 15 as set forth above. Sharma further teaches the method further comprising feeding the emerging communication messages into the abstractor neural network and taking the emerging communication messages into account in the abstracting (see at least Sharma [¶ 20, 46, 79] Embodiments may be directed to a consensus-based object observer and classifier system to allow CA/AD vehicles generate an accurate 3-D map of their environment by leveraging information from other CA/AD vehicles. The diversity in object classifiers makes the system able to detect objects in the environment that may not be visible to a CA/AD vehicle due to occlusions of the objects with respect to their sensors….the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space. Common key points may be found for the same objects as seen by different vehicles. A fundamental matrix F may be computed that is decomposed into a relative rotation R and translation t… the collaborative 3-D map system controller 420 may include one or more trained neural networks in performing its determinations and/or assessments).
Regarding Claim 20, Sharma teaches a non-transitory computer readable medium comprising program instructions (see at least Sharma [¶ 91] Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium)
that, when executed by at least one processor, cause the apparatus to perform at least the following:
generating state information about a part of environment where the device is positioned (see at least Sharma [¶4, 19] FIG. 1 illustrates an overview of an environment for CA/AD vehicles assisted by collaborative three dimensional (3-D) mapping technology of the present disclosure, in accordance with various embodiments....During operation, a CA/AD vehicle uses a number of cameras and other sensors to sense the surrounding environment. This information is sent to the computational systems within the CA/AD vehicle for processing and for navigation use)
receiving, by the device from at least one other device, messages comprising state information about a part of the environment where the at least one other device is positioned (see at least Sharma [¶ 14, 20, 46] a CA/AD… to manage a collaborative three-dimensional (3-D) map of an environment around the first CA/AD vehicle, wherein the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication of the at least the portion of the 3-D map proximate to the first CA/AD vehicle and the other CA/AD vehicle into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller...the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space)
abstracting the state space for the environment with an abstractor neural network based on the generated state information and the received state information messages to provide an abstracted state space (see at least Sharma [¶ 17, 46, 52] various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle. Prior to incorporating these various objects into a collaborative 3-D map, a localization technique may be applied to convert the coordinate system of the location of the various classified objects within the particular CA/AD vehicle map to the coordinate system of the collaborative 3-D map….the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier...the location of identified and classified objects may be provided in a compact representation of the 3-D space, which may be used to represent all or portions of a 3-D map local to a vehicle or a collaborative 3-D map). Mapping objects and the surrounding environment via a deep neural network to be represented by a coordinate system and a volumetric mapping representation is a type of abstraction of state information. Such a neural network is therefore analogous to an abstractor neural network.
generating navigation information for the device with the reinforced learning neural network for navigation in the environment based on the abstracted state space and further state information generated by the device and received from the at least one other device (see at least Sharma [¶ 14, 16-17, 33, 46, 88] the system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of another environment around both the first CA/AD vehicle and the other CA/AD vehicle and incorporate the received indication…into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller…various objects that have been classified in the surrounding environment are represented in a particular CA/AD vehicle map and may be associated with a coordinate system local to the particular CA/AD vehicle…the 3-D map is aligned directly by using the objects and bounding boxes that are determined by running a deep neural network classifier. Once this semantic information is available it may be used to find common objects and their location in the volumetric space…CA/AD vehicle 102 is configured with a collaborative 3-D map system controller 120 incorporated with the collaborative 3-D mapping technology of the present disclosure to provide CA/AD vehicles 102 with a more accurate collaborative 3-D map to guide/assist CA/AD vehicle 102 in navigating through the environment on roadway 108 to its destination).
However, Sharma does not explicitly teach wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages.
Patil, in the same field of endeavor, teaches wherein meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages (see at least Patil [Abstract and ¶ 36, 89, 104-107] A communication management component (CMC) can receive data and metadata from a device, analyze the data and metadata, and, based on the analyzing and data management criteria, determine whether any, all, or a portion of the data is to be communicated to a second device associated with the core network or associated communication network. CMC can be trained, using machine learning, to learn to identify device types, communication protocols, and data payload formats of devices. Based on the analyzing and the training, CMC can determine the device type, communication protocol, and data payload format associated with the device...The MEC component can be particularly relevant in the context of 5G networks in order to provide the low-latency capabilities that can be desired (e.g., wanted or required) for a number of different use cases and solutions (e.g., medical or emergency-related uses and solutions, uses and solutions relating to autonomous vehicles)…A communication device (e.g., 104, 106, . . . )…can refer to any type of wireless device that can communicate with a radio network node in a cellular or mobile communication system. 
Examples of communication devices can include...a device associated or integrated with a vehicle (e.g., automobile, airplane, bus, train, or ship)… the CMC 604 can comprise or be associated with a machine learning and/or AI engine 620 that can employ one or more desired machine learning and/or AI techniques that can enable the machine learning and/or AI engine 620, and thus, the CMC 604 to learn respective characteristics of or associated with various types of devices (e.g., 608, 610, 612)). The disclosure teaches a device that may be a vehicle that utilizes reinforcement learning to identify communication protocols; therefore, the disclosure teaches that learning the meaning of the messages comprises learning a communication protocol for receiving the messages.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma to contain a system wherein the meaning of the messages is learned by a reinforcement learning neural network based on emergent communication, the learning of the meaning comprising learning a communication protocol for receiving the messages, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the communication between vehicles as discussed in Patil (see at least Patil [¶ 89] although it is to be appreciated and understood that MEC components also can be useful (e.g., useful to reduce latency or otherwise improve performance in data processing and communications)).
Further, while the combination of Sharma and Patil teaches an abstractor neural network and a reinforcement learning neural network, it does not explicitly teach training the abstractor neural network and the reinforcement learning neural network in parallel.
Kolouri, in the same field of endeavor, teaches training two neural networks in parallel (see at least Kolouri [Abstract, ¶ 54-55, 65-67] Described is a system for controlling autonomous platform. Based on an input image, the system generates a motor control command decision for the autonomous platform....The system according to embodiments of the present disclosure consists of two general modules, namely the decision module (i.e., the learner) and the uncertainty module....During the training phase, both modules are trained in parallel, receiving the same input data…The decision module 400 consists of a deep neural network that is trained for decision making during the training phase in a supervised manner...the decision module 400 according to embodiments of the present disclosure is accompanied with an uncertainty module 402…The uncertainty module 402 receives as input the same training data as the decision module 400. The goal of the uncertainty module 402 is, however, to learn the distribution of the input data. To learn such a distribution, the combination of a deep adversarial convolutional auto-encoder 404 is used together with a unique Sliced Wasserstein Clustering technique (see Literature Reference No. 7). The auto-encoder 404 is an artificial neural network having multiple layers, typically an input layer, a code layer, and an output layer (represented by various sized rectangular shapes)). The disclosure in Kolouri teaches two modules trained in parallel, the decision module and the uncertainty module; it is further explained that these two modules may be implemented as neural networks. Therefore, Kolouri teaches training two neural networks in parallel.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma and Patil to contain a system for training the abstractor neural network and the reinforcement learning neural network in parallel, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the functioning of the system, saving time and resources by training the two modules at the same time. Further, as discussed in Kolouri, such a method may also be beneficial in needing only one set of input data when training two neural networks in parallel.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al (US 20190220003 A1) in view of Patil et al (US 20200366563 A1), Kolouri et al (US 20190294149 A1), and Groh (DE 102018216079 A1), hereafter referred to as Sharma, Patil, Kolouri, and Groh, respectively.
Regarding Claim 6, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. However, while the combination of Sharma, Patil, and Kolouri teaches the training of a reinforcement learning neural network, the combination does not explicitly teach wherein training starts with exchange of messages randomly selected from a predefined number of available messages.
Groh, in the same field of endeavor, teaches wherein training of a machine learning system starts with exchange of messages randomly selected from a predefined number of available messages (see at least Groh [English Translation pg.1 para.1, pg.5 para.4] Method for operating an at least partially autonomous robot, in particular an automated motor vehicle, by means of a control system (40) which comprises a machine learning system (60) which has been suitably trained....8th schematically shows an embodiment of a training system 140 to train the machine learning system 60 . A training data unit 150 determines suitable input variables x that the machine learning system 60 be fed. For example, the training data unit intervenes 150 to a computer-implemented database Q to, in which a set of training data is stored and, for example, randomly selects input variables from the set of training data x out).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma to contain a system for randomly selecting inputs for training of the reinforcement learning neural network, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the learning of the system by avoiding biases when selecting input training data.
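For illustration only (not from the cited references or the claims; the vocabulary size and names are hypothetical), the starting condition recited in Claim 6 — an initial exchange of messages selected at random from a predefined set of available messages — has a direct minimal sketch:

```python
import random

VOCABULARY = list(range(16))  # a predefined number of available messages

def initial_messages(n_agents, rng):
    """At the start of training, each agent's first message is drawn
    uniformly at random from the fixed, predefined vocabulary."""
    return [rng.choice(VOCABULARY) for _ in range(n_agents)]

rng = random.Random(0)
msgs = initial_messages(4, rng)
```

Subsequent training would then shape which message each agent selects, but the initial exchange carries no learned meaning.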
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al (US 20190220003 A1) in view of Patil et al (US 20200366563 A1), Kolouri et al (US 20190294149 A1), Groh (DE 102018216079 A1) and Muehlenstaedt et al (US 20220230021 A1), hereafter referred to as Sharma, Patil, Kolouri, Groh, and Muehlenstaedt, respectively.
Regarding Claim 7, Sharma in view of Patil, Kolouri, and Groh teaches all limitations of Claim 6 as set forth above. However, the combination does not explicitly teach wherein the instructions when executed by the at least one processor further cause the apparatus to: learn how to partition the search space into different categories and assign the messages into the categories.
Muehlenstaedt, in the same field of endeavor, teaches learning how to partition the search space into different categories and assign the messages into the categories (see at least Muehlenstaedt [¶ 16] A variety of algorithms for control and navigation of autonomous vehicles, such as object detection algorithms for detecting objects in images, use machine learning models that are built using labeled data (e.g., training data, test data, validation data, etc.)…In active learning, “useful” data (e.g., an image having a wrongly predicted label, or an uncertain prediction label, etc.) is selected for subsequent training of a machine learning model, instead of passively accepting randomly selected data. Active learning can significantly reduce the amount of training data required, compared to passive learning while achieving similar classification accuracy as passive learning).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in the combination of Sharma, Patil, Kolouri, and Groh to contain a system for partitioning the search space into different categories and assigning the messages into the categories, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the learning of the system by decreasing the computing cost of the training as discussed in Muehlenstaedt (see at least Muehlenstaedt [¶ 16] While such training such models require a large amount of training data (i.e., labeled images), it is not feasible to use all or majority of data collected by an autonomous vehicle because of processing, cost, memory and transmission constraints….However, such random selection of training data requires expensive labeling which might not improve the training of the machine learning model (e.g., when the randomly selected training data does not include useful information)).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al (US 20190220003 A1) in view of Patil et al (US 20200366563 A1), Kolouri et al (US 20190294149 A1), Groh (DE 102018216079 A1) and Takahashi et al (US 20220113724 A1). Hereafter referred to as Sharma, Patil, Kolouri, Groh, and Takahashi respectively.
Regarding Claim 8, Sharma in view of Patil, Kolouri, and Groh teaches all limitations of Claim 6 as set forth above. However, the combination does not explicitly teach wherein the instructions when executed by the at least one processor further cause the apparatus to: select a message based on at least one of: local observation, position, or a message from a previous iteration step.
Takahashi, in the same field of endeavor, teaches selecting a message based on at least one of: local observation, position, or a message from a previous iteration step (see at least Takahashi [¶ 38] The training data is data in which the image information, the tactile information, and at least one of the position and the posture of the object 500 (correct answer data) are associated with each other, for example. The training unit 102 trains using such training data, which provides a neural network that outputs output data indicating at least one of the position and the posture of the object 500 to the input image information and tactile information).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in the combination of Sharma, Patil, Kolouri, and Groh to contain a system for selecting a message based on position, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of training a vehicle’s neural network with position data, a practice that is well known in the art.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al (US 20190220003 A1) in view of Patil et al (US 20200366563 A1), Kolouri et al (US 20190294149 A1), Zhang et al (CN 112414401 A), and Qu (WO 2022171066 A1). The latter two references are hereafter referred to as Zhang and Qu, respectively.
Regarding Claim 12, Sharma in view of Patil and Kolouri teaches all limitations of Claim 1 as set forth above. However, the combination does not explicitly teach wherein the instructions when executed by the at least one processor further cause the apparatus to: input to the abstractor neural network of a feature matrix, an adjacency matrix, current position of the device, and at least one message from the at least one other device;
and generate an abstracted feature matrix and an abstracted adjacency matrix, and input of the abstracted matrices into the reinforced learning module.
Zhang, in the same field of endeavor, teaches inputting to the abstractor neural network of a feature matrix, an adjacency matrix, current position of the device, and at least one message from at least one other device (see at least Zhang [English Translation: Abstract and pg.5 para.9-12, pg.3 para.10] The invention claims an unmanned aerial vehicle cooperative positioning system based on graph neural network…the input of the graph convolution network further comprises a graph adjacency matrix, the graph adjacency matrix for describing the adjacent relation between the unmanned aerial vehicle and other unmanned aerial vehicle, the graph adjacency matrix can be obtained according to a plurality of unmanned aerial vehicle own state information....the unmanned aerial vehicle self low-dimensional characteristic is obtained by the full connection network; specifically, the unmanned aerial vehicle own state information and the target position information as input of the full connection network…the server is used for sending the initial target position to the unmanned aerial vehicle, receiving the unmanned aerial vehicle self state information transmitted by each unmanned aerial vehicle and the detected target position information, and transmitting the received information to the other unmanned aerial vehicle).
Qu, in the same field of endeavor, teaches generating an abstracted feature matrix and an abstracted adjacency matrix, and inputting the abstracted feature matrix and the abstracted adjacency matrix into the reinforced learning neural network (see at least Qu [English Translation pg.23 para.2] The third step is to preprocess the computation graph and the resource subgraph respectively, specifically determining the input feature set (also called input feature, or input feature matrix) and the adjacency matrix corresponding to the computation graph, and determining each resource subgraph…The input feature set (also referred to as input feature, or input feature matrix) corresponding to each resource sub-graph and the adjacency matrix are input to the feature extraction module for feature extraction to obtain a second feature set. Wherein, exemplarily, the feature extraction module may be implemented by a graph convolutional neural network (GCN)).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Sharma to contain a system for inputting a feature matrix, an adjacency matrix, the current position of the device, and at least one message from at least one other device into the abstractor neural network; generating an abstracted feature matrix and an abstracted adjacency matrix; and inputting the abstracted matrices into the reinforced learning module, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefits discussed in Qu (see at least Qu [English Translation pg.25 para.5] The feature extraction module extracts the node and topology features of the resource subgraph and the computation graph respectively, and performs feature fusion. Realize deep perception, feature extraction and feature matching of dimension features such as computing power, storage, and communication that play a key role in the performance of deep learning computing tasks).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A YANOSKA whose telephone number is (703)756-5891. The examiner can normally be reached M-F 9:00am to 5:00pm (Pacific Time).
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached on (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH ANDERSON YANOSKA/Examiner, Art Unit 3664
/RACHID BENDIDI/Supervisory Patent Examiner, Art Unit 3664