DETAILED ACTION
This action is in response to the amendment filed 01/27/2026. Claims 1-5, 7-11, 13-17, 19-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 13-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 13 recites the limitation “the computer-readable storage medium” in line 4. There is insufficient antecedent basis for this limitation in the claim. It is unclear as to whether this computer-readable storage medium is referencing the “non-transitory computer-readable storage medium” from line 1. For purposes of examination, Examiner has interpreted this computer-readable storage medium to be referencing the “non-transitory computer-readable storage medium” from line 1.
Regarding claims 14-17, these claims are rejected for at least the same reasons as claim 13 since claims 14-17 depend from claim 13.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 7-11, 13-17, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 1 recites
Making…an optimized decision on the first function module based on the related information of the second agent. (This limitation is a mental process as it encompasses a human mentally making a decision.)
Therefore, claim 1 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 1 further recites additional elements of
An agent decision-making method comprising: obtaining by a first agent deployed at a first protocol layer, related information of the second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
the first agent is deployed on a first function module, and the second agent is deployed on a second function module (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
by the first agent (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
the first agent is implemented by a processor (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
the first function module is an audio/video coding module located in a communications system (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
the second function module is different from the first function module (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 1 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination because
An agent decision-making method comprising: obtaining by a first agent deployed at a first protocol layer, related information of the second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
the first agent is deployed on a first function module, and the second agent is deployed on a second function module uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
by the first agent uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the first agent is implemented by a processor uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the first function module is an audio/video coding module located in a communications system specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
the second function module is different from the first function module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 1 is subject-matter ineligible.
Regarding Claim 2:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 2 recites the same abstract idea as claim 1. Therefore, claim 2 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 2 further recites additional elements of
wherein the related information of the second agent comprises at least one of the following information: a first evaluation parameter made by the second agent for a historical decision of the first agent, a historical decision of the second agent, a neural network parameter of the second agent, or an update gradient of the neural network parameter of the second agent. (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
Therefore, claim 2 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 2 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the related information of the second agent comprises at least one of the following information: a first evaluation parameter made by the second agent for a historical decision of the first agent, a historical decision of the second agent, a neural network parameter of the second agent, or an update gradient of the neural network parameter of the second agent is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
Therefore, claim 2 is subject-matter ineligible.
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 3 recites
wherein the making … the decision on the first function module based on the related information of the second agent further comprises: making … the decision on the first function module based on the related information of the second agent, and further based on at least one of related information of the first function module or related information of the second function module (This limitation is a mental process as it encompasses a human mentally making a decision.)
Therefore, claim 3 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 3 further recites additional elements of
by the first agent (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
Therefore, claim 3 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 3 do not provide significantly more than the abstract idea itself, taken alone and in combination because
By the first agent uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 3 is subject-matter ineligible.
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 4 recites
Wherein the method further comprises making… the decision on the first function module based on the related information of the second agent, and further based on the related information of the first function module and the related information of the second function module (This limitation is a mental process as it encompasses a human mentally making a decision.)
Therefore, claim 4 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 4 further recites additional elements of
by the first agent (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
the related information of the first function module comprises at least one of the following information: current environment status information of the first function module, predicted environment status information of the first function module, or a second evaluation parameter made by the first function module for the historical decision of the first agent; (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)) since obtaining this type of data is still data gathering.)
the related information of the second function module comprises at least one of current environment status information of the second function module or predicted environment status information of the second function module. (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)) since obtaining this type of data is still data gathering.)
Therefore, claim 4 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 4 do not provide significantly more than the abstract idea itself, taken alone and in combination because
by the first agent uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the related information of the first function module comprises at least one of the following information: current environment status information of the first function module, predicted environment status information of the first function module, or a second evaluation parameter made by the first function module for the historical decision of the first agent is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
the related information of the second function module comprises at least one of current environment status information of the second function module or predicted environment status information of the second function module is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
Therefore, claim 4 is subject-matter ineligible.
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 5 recites the same abstract ideas as claim 1. Therefore, claim 5 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 5 further recites additional elements of
the first function module comprises one of a radio link control (RLC) layer function module, a medium access control (MAC) layer function module, or a physical (PHY) layer function module; (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
the second function module comprises at least one function module other than the first function module. (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 5 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination because
the first function module comprises one of a radio link control (RLC) layer function module, a medium access control (MAC) layer function module, or a physical (PHY) layer function module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
the second function module comprises at least one function module other than the first function module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 5 is subject-matter ineligible.
Regarding Claim 7:
Subject Matter Eligibility Analysis Step 1:
Claim 7 recites an apparatus and is thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 7 recites
make an optimized decision on the first function module based on the related information of the second agent. (This limitation is a mental process as it encompasses a human mentally making a decision.)
Therefore, claim 7 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 7 further recites additional elements of
A communications apparatus, comprising: a first function module…; a second function module…; a first agent deployed at a first protocol layer, the first agent configured in the first function module; and a second agent deployed at a second protocol layer, the second agent configured in the second function module, wherein the first protocol layer is different from the second protocol layer (This element does not integrate the abstract idea into a practical application because it recites generic computer components to perform the abstract idea (see MPEP 2106.05(f)).)
the first function module is an audio/video coding module (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
the second function module is a communications module (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
The first agent comprises: a communications interface, (This element does not integrate the abstract idea into a practical application because it recites generic computer components to perform the abstract idea (see MPEP 2106.05(f)).)
obtain related information of the second agent; (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
A processing circuit, (This element does not integrate the abstract idea into a practical application because it recites generic computer components to perform the abstract idea (see MPEP 2106.05(f)).)
Therefore, claim 7 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 7 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A communications apparatus, comprising: a first function module…; a second function module…; a first agent deployed at a first protocol layer, the first agent configured in the first function module; and a second agent deployed at a second protocol layer, the second agent configured in the second function module, wherein the first protocol layer is different from the second protocol layer uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the first function module is an audio/video coding module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
the second function module is a communications module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
The first agent comprises: a communications interface uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
obtain related information of the second agent is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
A processing circuit uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
Therefore, claim 7 is subject-matter ineligible.
Regarding claim 8, claim 8 recites substantially similar limitations to claim 2 and is therefore rejected under the same analysis.
Regarding claim 9, claim 9 recites substantially similar limitations to claim 3 and is therefore rejected under the same analysis.
Regarding claim 10, claim 10 recites substantially similar limitations to claim 4 and is therefore rejected under the same analysis.
Regarding claim 11, claim 11 recites substantially similar limitations to claim 5 and is therefore rejected under the same analysis.
Regarding Claim 13:
Subject Matter Eligibility Analysis Step 1:
Claim 13 recites a non-transitory computer-readable storage medium and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 13 recites
Making … an optimized decision on the first function module based on the related information of the second agent (This limitation is a mental process as it encompasses a human mentally making a decision.)
Therefore, claim 13 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 13 further recites additional elements of
A non-transitory computer-readable storage medium wherein the computer-readable storage medium stores program instructions, and when the program instructions are run by a processor, the operations implemented by the communications system (This element does not integrate the abstract idea into a practical application because it recites generic computer components to perform the abstract idea (see MPEP 2106.05(f)).)
obtaining by a first agent deployed at a first protocol layer, related information of the second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
the first agent is deployed on a first function module, and the second agent is deployed on a second function module (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
by the first agent (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
the first agent is implemented by a processor (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).)
the first function module is an audio/video coding module located in a communications system (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
the second function module is different from the first function module (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 13 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 13 do not provide significantly more than the abstract idea itself, taken alone and in combination because
A non-transitory computer-readable storage medium wherein the computer-readable storage medium stores program instructions, and when the program instructions are run by a processor, the operations implemented by the communications system uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
obtaining by a first agent deployed at a first protocol layer, related information of the second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
the first agent is deployed on a first function module, and the second agent is deployed on a second function module uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
by the first agent uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the first agent is implemented by a processor uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
the first function module is an audio/video coding module located in a communications system specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
the second function module is different from the first function module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 13 is subject-matter ineligible.
Regarding claim 14, claim 14 recites substantially similar limitations to claim 2 and is therefore rejected under the same analysis.
Regarding claim 15, claim 15 recites substantially similar limitations to claim 3 and is therefore rejected under the same analysis.
Regarding claim 16, claim 16 recites substantially similar limitations to claim 4 and is therefore rejected under the same analysis.
Regarding claim 17, claim 17 recites substantially similar limitations to claim 5 and is therefore rejected under the same analysis.
Regarding Claim 19:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 19 recites the same abstract ideas as claim 1. Therefore, claim 19 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 19 further recites additional elements of
wherein the first agent and the second agent are deployed at an RLC layer, a MAC layer, or a PHY layer (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 19 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 19 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the first agent and the second agent are deployed at an RLC layer, a MAC layer, or a PHY layer specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 19 is subject-matter ineligible.
Regarding Claim 20:
Subject Matter Eligibility Analysis Step 2A Prong 1:
Claim 20 recites the same abstract idea as claim 1. Therefore, claim 20 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2:
Claim 20 further recites additional elements of
wherein the second function module is a communications module (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)
Therefore, claim 20 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B:
The additional elements of claim 20 do not provide significantly more than the abstract idea itself, taken alone and in combination because
wherein the second function module is a communications module specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)).
Therefore, claim 20 is subject-matter ineligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 7-10, 13-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hu et al. (US 2019/0266489 A1) (hereafter referred to as Hu).
Regarding claim 1, Hu teaches
An agent decision-making method comprising: obtaining, by a first agent deployed at a first protocol layer, related information of a second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer, the first agent is deployed on a first function module, and the second agent is deployed on a second function module (Hu, page 28, paragraph 0056, “The system may further include a communication interface 150 which enables the CM3 policy network 140 to be transmitted to other devices, such as a server 160, which may include a CM3 database 162” where “the learning which may occur in stage two may be achieved by sharing data learned by a first agent with a second agent and vice versa (e.g., sharing data learned by the second agent with the first agent)” (Hu, page 30, paragraph 0075) and where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059) and Hu, page 2 Figure 1.
Examiner notes that the first agent is the second vehicle controller and the second agent is the first vehicle controller. Examiner further notes that sharing data learned by the first vehicle’s controller with the second vehicle’s controller is obtaining related information of the second agent. Additionally, 176 is the first agent deployed at a first protocol layer 172, and 186 is the second agent deployed at a second protocol layer 182. Examiner further notes that both the first and second vehicle controllers, or agents, are deployed on the vehicles, or function modules.);
and making, by the first agent, an optimized decision on the first function module based on the related information of the second agent (Hu, page 30, paragraph 0080, “The processor 102 or the simulator 108 may generate a CM3 network policy based on the first agent neural network and the second agent neural network….The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle ( e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108” where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059) and where “the simulator 108 may optimize the CM3 network policy based on the local view and the global view”(Hu, page 30, paragraph 0081). Examiner notes that the controller of the autonomous vehicle is the first agent which is in the first vehicle or function module and the related information of the second vehicle’s controller is the data received from a first autonomous vehicle or agent. The action performed is the optimized decision. Additionally, the processor is the processing circuit.)
the first agent is implemented by a processor (Hu, page 26, paragraph 0012, “A vehicle for interaction-aware decision making may include a controller, one or more vehicle systems, and a vehicle communication interface. The controller may include a processor and a memory”)
the first function module is an audio/video coding module located in a communications system (Hu, page 27, paragraph 0044, “A ‘vehicle system’, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, driving, and/or safety. Exemplary vehicle systems include an autonomous driving system, an electronic stability control system, an anti-lock brake system, …visual devices (e.g., camera systems, proximity sensor systems), …an audio system” and Hu, page 2, Figure 1.
Examiner notes that the vehicle 170, or first function module, is an audio/video coding module since it has an audio system. Examiner further notes that Figure 1 displays vehicle 170 as part of a communications system.)
and the second function module is different from the first function module (Hu, page 2, Figure 1. Examiner notes that vehicle 180, the second function module, is distinct from, and thus different from, vehicle 170.).
Regarding claim 2, Hu teaches
The method according to claim 1, wherein the related information of the second agent comprises at least one of the following information: a first evaluation parameter made by the second agent for a historical decision of the first agent, a historical decision of the second agent, a neural network parameter of the second agent, or an update gradient of the neural network parameter of the second agent (Hu, page 30, paragraph 0077, “The second agent neural network may be associated with an oothers parameter for each of the N number of agents indicative of a local observation of each of the corresponding N number of agents” where “by having the simulator 108 and critic observe the number of N number of agents, learning for different scenarios may occur in parallel. Stated another way, the learning which may occur in stage two may be achieved by sharing data learned by a first agent with a second agent and vice versa” (Hu, page 30, paragraph 0075) and “according to one aspect, parameter-sharing may be provided among one or more to all of the agents by the simulator 108” (Hu, page 31, paragraph 0086) and where “The processor 102 or the simulator 108 may generate a CM3 network policy based on the first agent neural network and the second agent neural network. … The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example, the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle” (Hu, page 30, paragraph 0080). Examiner notes that the other agents sharing parameters with the second agent neural network via their controllers is the first agent obtaining related information or a neural network parameter of the second agent.).
Regarding claim 3, Hu teaches
The method according to claim 1, wherein the making, by the first agent, the decision on the first function module based on the related information of the second agent further comprises: making, by the first agent, the decision on the first function module based on the related information of the second agent, and further based on at least one of related information of the first function module or related information of the second function module; (Hu, page 30, paragraph 0080, “The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108.” Examiner notes that the observation of the controller of the autonomous vehicle is the related information of the first function module and the related information of the second agent is the data received from a first autonomous vehicle’s controller. The action performed is the decision.).
Regarding claim 4, Hu teaches
The method according to claim 3, wherein the method further comprises making, by the first agent, the decision on the first function module based on the related information of the second agent, and further based on at least one of related information of the first function module and related information of the second function module (Hu, page 30, paragraph 0080, “The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108.” Examiner notes that the observation of the controller of the autonomous vehicle is the related information of the first function module and the related information of the second agent is the data received from a first autonomous vehicle’s controller. Examiner further notes that the related information of the second agent is the related information of the second function module. The action performed is the decision.):
the related information of the first function module comprises at least one of the following information: current environment status information of the first function module, predicted environment status information of the first function module, or a second evaluation parameter made by the first function module for the historical decision of the first agent (Hu, page 30, paragraph 0080, “The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108.” Examiner notes that the observation of the autonomous vehicle is the related information of the first function module.);
and the related information of the second function module comprises at least one of current environment status information of the second function module or predicted environment status information of the second function module (Hu, page 30, paragraph 0080, “The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108.” Examiner notes that the observation of the autonomous vehicle is the related information of the second function module.).
Regarding claim 7, Hu teaches
A communications apparatus, comprising: a first function module, wherein the first function module is an audio/video coding module; a second function module, wherein the second function module is a communications module; a first agent deployed at a first protocol layer, the first agent configured in the first function module; and a second agent deployed at a second protocol layer, the second agent configured in the second function module, wherein the first protocol layer is different from the second protocol layer (Hu, page 30, paragraph 0075, “the learning which may occur in stage two may be achieved by sharing data learned by a first agent with a second agent and vice versa” where “an ‘agent’, as used herein, may refer to a ‘vehicle’, such as a vehicle within a simulation or a simulated vehicle” (Hu, page 27, paragraph 0043) and “the first vehicle may be equipped with a vehicle communication interface 172, a storage device 174, a controller 176, and one or more vehicle systems….similarly, the second vehicle 180 may be equipped with a vehicle communication interface 182, a storage device 184, a controller 186, and one or more vehicle systems” (Hu, page 28, paragraph 0057-0058) where “A ‘vehicle system’, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, driving, and/or safety. Exemplary vehicle systems include an autonomous driving system, an electronic stability control system, an anti-lock brake system, …visual devices (e.g., camera systems, proximity sensor systems), …an audio system” (Hu, page 27, paragraph 0044) and Hu, page 2 Figure 1.
Examiner notes that the communications system is the vehicles sharing data, the first function module is the second agent or vehicle, and the second function module is the first agent or vehicle. Examiner further notes that the vehicles are configured with controllers which are the first and second agents. Additionally, 176 is the first agent deployed at a first protocol layer 172 and 186 is the second agent deployed at a second protocol layer 182. Examiner notes that the vehicle 170, or first function module, is an audio/video coding module since it has an audio system. Examiner further notes that Figure 1 displays vehicle 180 as a communications module since it has a communication interface.),
wherein the first agent comprises: a communications interface configured to obtain related information of the second agent (Hu, page 28, paragraph 0056, “The system may further include a communication interface 150 which enables the CM3 policy network 140 to be transmitted to other devices, such as a server 160, which may include a CM3 database 162” where “the learning which may occur in stage two may be achieved by sharing data learned by a first agent with a second agent and vice versa (e.g., sharing data learned by the second agent with the first agent)” (Hu, page 30, paragraph 0075) and where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059). Examiner notes that the first agent is the second vehicle controller and the second agent is the first vehicle controller. Examiner further notes that sharing data learned by the first vehicle’s controller with the second vehicle’s controller is obtaining related information of the second agent.);
and a processing circuit configured to make an optimized decision on the first function module based on the related information of the second agent (Hu, page 30, paragraph 0080, “The processor 102 or the simulator 108 may generate a CM3 network policy based on the first agent neural network and the second agent neural network…. The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108” where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059) and where “the simulator 108 may optimize the CM3 network policy based on the local view and the global view” (Hu, page 30, paragraph 0081). Examiner notes that the controller of the autonomous vehicle is the first agent which is in the first vehicle or function module and the related information of the second vehicle’s controller is the data received from a first autonomous vehicle or agent. The action performed is the optimized decision. Additionally, the processor is the processing circuit.).
Regarding claim 8, claim 8 recites substantially similar limitations to claim 2 and is therefore rejected under the same analysis.
Regarding claim 9, claim 9 recites substantially similar limitations to claim 3 and is therefore rejected under the same analysis.
Regarding claim 10, claim 10 recites substantially similar limitations to claim 4 and is therefore rejected under the same analysis.
Regarding claim 13, Hu teaches
A non-transitory computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and when the program instructions are run by a processor (Hu, page 27, paragraph 0045, “The aspects discussed herein may be described and implemented in the context of non-transitory computer readable storage medium storing computer-executable instructions” where “Still another aspect involves a computer-readable medium including processor-executable instructions configured to implement one aspect of the techniques presented herein” (Hu, page 43, paragraph 0252).),
the operations implemented by the communications system comprises: obtaining, by a first agent deployed at a first protocol layer, related information of a second agent deployed at a second protocol layer, wherein the first protocol layer is different from the second protocol layer, the first agent is deployed on a first function module, and the second agent is deployed on a second function module (Hu, page 28, paragraph 0056, “The system may further include a communication interface 150 which enables the CM3 policy network 140 to be transmitted to other devices, such as a server 160, which may include a CM3 database 162” where “the learning which may occur in stage two may be achieved by sharing data learned by a first agent with a second agent and vice versa (e.g., sharing data learned by the second agent with the first agent)” (Hu, page 30, paragraph 0075) and where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059) and Hu, page 2 Figure 1.
Examiner notes that the first agent is the second vehicle controller and the second agent is the first vehicle controller. Examiner further notes that sharing data learned by the first vehicle’s controller with the second vehicle’s controller is obtaining related information of the second agent. Additionally, 176 is the first agent deployed at a first protocol layer 172, and 186 is the second agent deployed at a second protocol layer 182. Examiner further notes that both the first and second vehicle controllers, or agents, are deployed on the vehicles, or function modules.);
and making, by the first agent, an optimized decision on the first function module based on the related information of the second agent (Hu, page 30, paragraph 0080, “The processor 102 or the simulator 108 may generate a CM3 network policy based on the first agent neural network and the second agent neural network…. The CM3 network policy may be indicative of data which may be utilized to direct the controller of the autonomous vehicle(s) of FIG. 1 to operation in an autonomous fashion. For example the CM3 network policy may receive an input of an observation associated with the first autonomous vehicle or the second autonomous vehicle (e.g., a vehicle state or an environment state) and output a suggested action, which may include the no-operation action, the acceleration action, the deceleration action, the shift left one sub-lane action, and the shift right one sub-lane action, similarly to the actions used during simulation and provided by the simulator 108” where “when the CM3 policy network is stored on the storage device of the vehicle, this enables the controller to autonomously drive the vehicle around based on the CM3 policy network 140, and to make autonomous driving decisions based on the CM3 reinforcement learning” (Hu, page 28, paragraph 0059) and where “the simulator 108 may optimize the CM3 network policy based on the local view and the global view” (Hu, page 30, paragraph 0081). Examiner notes that the controller of the autonomous vehicle is the first agent which is in the first vehicle or function module and the related information of the second vehicle’s controller is the data received from a first autonomous vehicle or agent. The action performed is the optimized decision. Additionally, the processor is the processing circuit.)
the first agent is implemented by a processor (Hu, page 26, paragraph 0012, “A vehicle for interaction-aware decision making may include a controller, one or more vehicle systems, and a vehicle communication interface. The controller may include a processor and a memory”)
the first function module is an audio/video coding module located in a communications system (Hu, page 27, paragraph 0044, “A ‘vehicle system’, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, driving, and/or safety. Exemplary vehicle systems include an autonomous driving system, an electronic stability control system, an anti-lock brake system, …visual devices (e.g., camera systems, proximity sensor systems), …an audio system” and Hu, page 2, Figure 1. Examiner notes that the vehicle 170, or first function module, is an audio/video coding module since it has an audio system. Examiner further notes that Figure 1 displays vehicle 170 as part of a communications system.)
and the second function module is different from the first function module (Hu, page 2, Figure 1. Examiner notes that vehicle 180, the second function module, is distinct from, and thus different from, vehicle 170.).
Regarding claim 14, claim 14 recites substantially similar limitations to claim 2 and is therefore rejected under the same analysis.
Regarding claim 15, claim 15 recites substantially similar limitations to claim 3 and is therefore rejected under the same analysis.
Regarding claim 16, claim 16 recites substantially similar limitations to claim 4 and is therefore rejected under the same analysis.
Regarding claim 20, Hu teaches
The method according to claim 1, wherein the second function module is a communications module (Hu, page 2, Figure 1. Examiner notes that Figure 1 displays vehicle 180 as a communications module since it has a communication interface.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 5, 11, 17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu in view of Balakrishnan et al. (US 2021/0258988 A1) (hereafter referred to as Balakrishnan).
Regarding claim 5, Hu teaches the method according to claim 1 (see the 102 rejection of claim 1). Hu also teaches the first function module and the second function module (Hu, page 28, paragraphs 0057-0058, “The first vehicle may be equipped with a vehicle communication interface 172, a storage device 174, a controller 176, and one or more vehicle systems …. similarly, the second vehicle 180 may be equipped with a vehicle communication interface 182, a storage device 184, a controller 186, and one or more vehicle systems”. Examiner notes that the first and second vehicles are the first and second function modules.).
Hu does not teach, but Balakrishnan does teach
wherein: the … function module comprises one of a radio link control (RLC) layer function module, a medium access control (MAC) layer function module, or a physical (PHY) layer function module (Balakrishnan, page 16, paragraph 0036, “Millimeter wave communication circuitry 300 may include protocol processing circuitry 305, which may implement one or more of medium access control (MAC)…functions.” Examiner notes that the function module is the millimeter wave communication circuitry.)
and the … function module comprises at least one function module other than the … function module (Balakrishnan, page 16, paragraph 0037, “Millimeter wave communication circuitry 300 may further include digital baseband circuitry 310, which may implement physical layer (PHY) functions.” Examiner notes that the function module is the millimeter wave communication circuitry.)
Hu and Balakrishnan are analogous to the claimed invention because they both use reinforcement learning to train communications devices. It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Hu to use RLC, MAC, or PHY layer function modules in the first and second function modules. Doing so, “minimize[s] interference in the neighboring network, or discover[s] and determine[s] topology of the neighboring network” (Balakrishnan, page 21, paragraph 0223).
Regarding claim 11, claim 11 recites substantially similar limitations to claim 5 and is therefore rejected under the same analysis.
Regarding claim 17, claim 17 recites substantially similar limitations to claim 5 and is therefore rejected under the same analysis.
Regarding claim 19, Hu teaches the method according to claim 1 (see the 102 rejection of claim 1). Hu also teaches the first agent and the second agent (Hu, page 28, paragraphs 0057-0058, “The first vehicle may be equipped with a vehicle communication interface 172, a storage device 174, a controller 176, and one or more vehicle systems …. similarly, the second vehicle 180 may be equipped with a vehicle communication interface 182, a storage device 184, a controller 186, and one or more vehicle systems”. Examiner notes that the first and second vehicles are the first and second function modules and the controllers are the first and second agents.).
Hu does not teach, but Balakrishnan does teach
wherein: the … agent [is] deployed at an RLC layer, a MAC layer, or a PHY layer (Balakrishnan, page 16, paragraph 0054, “FIG. 4 is an illustration of protocol functions in accordance with some aspects. The protocol functions may be implemented in a wireless communication device according to some aspects. In some aspects, the protocol layers may include one or more of physical layer (PHY) 410, medium access control layer (MAC) 420, radio link control layer (RLC) 430” Examiner notes that agent is the wireless communication device.)
Hu and Balakrishnan are analogous to the claimed invention because they both use reinforcement learning to train communications devices. It would have been obvious to one of ordinary skill in the art prior to the effective filing date to have modified Hu to use RLC, MAC, or PHY layers in the first and second agents. Doing so, “minimize[s] interference in the neighboring network, or discover[s] and determine[s] topology of the neighboring network” (Balakrishnan, page 21, paragraph 0223).
Response to Arguments
The title objection has been overcome in light of the instant amendments.
The previous 112(b) rejections have been overcome in light of the instant amendments.
On pages 10-12, Applicant argues:
The current independent claims are directed to a technical solution to a technical problem. The technical problem is explained in Applicant's Specification [0005] as follows,
A design of dividing a system into modules or a design of dividing a protocol into layers reduces implementation complexity, allows each module/layer to focus on a specific task, and facilitates optimization for each module/layer. However, an interaction relationship between modules or layers is split, and usually, only a local optimal solution is obtained [ emphasis supplied].
As explained in Applicant's Specification [0009],
"According to the foregoing technical solution, different agents may be deployed in different modules of the communications system as required. The agent may obtain related information of an agent configured in another function module other than the function module, and consider coordination between the module and the other module when making a decision, to make an optimal decision" [ emphasis supplied].
As further explained in Applicant's Specification [00103],
in a multimedia communications system, for example, in a cellular network that transmits an audio/video stream service, an audio/video coding module needs to determine parameters such as a bit rate, a frame rate, and resolution for audio/video coding based on factors such as a requirement of a receive end, a software and hardware capability of the audio/video coding module, and communications link quality. A communications module needs to determine solutions such as radio resource usage, channel coding, and a modulation scheme based on factors such as a status (a size, a QoS requirement, and the like) of to-be transmitted data, and radio channel quality. A decision of an audio/video coding module affects the status of the to-be transmitted data received by the communications module. On the other hand, a decision of the communications module also affects communications link quality information that can be obtained by the audio/video coding module. An agent may be deployed in each of two modules. Interaction and coordination are performed between the modules based on a multi-agent reinforcement learning framework, to adapt to an environment change [ emphasis supplied].
The solution in the present application and claims is described as deploying agents on different modules and at different protocol layers and coordinating between modules to make an optimal decision. Applicant's Specification [0009], "According to the foregoing technical solution, different agents may be deployed in different modules of the communications system as required. The agent may obtain related information of an agent configured in another function module other than the function module, and consider coordination between the module and the other module when making a decision, to make an optimal decision"; see also at least Claim 1. This solution is further enhanced by deploying the modules across different layers, as shown in FIG. 6, reproduced below, and recited in the currently amended independent claims, because it also allows for cross-layer coordination.
…
As shown in FIG. 6 and recited in the independent claims, Agent 1 and Agent 2 communicate with each other across layers, which allows for determination of an optimal decision. See at least Applicant's Specification [0097] and Claim 1.
Because the independent claims recite a technical solution to a technical problem, they integrate any alleged abstract idea into a practical application and are not directed to an abstract idea. Therefore, the Applicant respectfully requests withdrawal of the 35 U.S.C. § 101 rejections of independent claims 1, 7 and 13, and withdrawal of the 35 U.S.C. § 101 rejections of the dependent claims, at least because they depend on the independent claims.
Regarding the Applicant’s argument that the claims recite a technical solution to a technical problem, Examiner respectfully disagrees. Specifically, Examiner notes that “making an optimal decision” as in claim 1 recites an improvement to the mental process of “making… a decision”. Examiner respectfully notes that improvements must come from the additional elements (MPEP 2106.04(d)(I)). Examiner further notes that an improvement to an abstract idea still results in an abstract idea. Examiner additionally notes that the claimed agents are used to apply the mental process to a computer and thus cannot provide an improvement (see MPEP 2106.05(f)). Lastly, Examiner notes that “interaction and coordination are performed between the modules based on a multi-agent reinforcement learning framework, to adapt to an environment change” is not claimed. Examiner further respectfully notes that if this were to be claimed, it would be using a computer as a tool to perform the abstract idea and thus does not recite a technical solution to a technical problem (see MPEP 2106.05(f)).
On pages 12-14, Applicant argues:
Hu does not teach, at least, "the first agent is deployed on a first function module ... the first function module is an audio/video coding module located in a communications system", as recited in the independent claims
In the FOA, "Examiner notes that the first agent is 178 (this appears to be a typo, see OA page 24, "176 is the first agent") and the first function module is 170. Similarly, the second agent is 186 and the second function module is 180", and the Examiner points to Hu FIG. 1, reproduced below. FOA page 26.
…
As clearly seen from Hu FIG. 1 and [0057], the controllers 176 and 186 are on vehicles 170 and 180. However, the independent claims require the first agent to be deployed on an audio/video coding module in a communications system, and vehicles 170 and 180 are clearly not audio/video coding modules. See also Hu [0057], "The first vehicle may be equipped with a vehicle communication interface 172, a storage device 174, a controller 176, and one or more vehicle systems 178." Therefore, the Applicant respectfully requests withdrawal of the 35 U.S.C. § 102(a)(1) rejections of claims 1, 7 and 13, and withdrawal of the art rejections of the dependent claims at least because they depend on the independent claims.
Regarding the Applicant’s argument that the prior art of record does not teach “the first agent is deployed on a first function module…the first function module is an audio/video coding module located in a communications system”, the Examiner respectfully disagrees. Specifically, Examiner notes that vehicle 170 is the first function module. Examiner further notes that since the vehicle has an audio system (Hu, page 27, paragraph 0044), under the broadest reasonable interpretation it is an audio coding module. Examiner further notes that since the vehicle communicates with the server, it is located in a communications system. Examiner respectfully directs the Applicant to the 102 rejections above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al. (“Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms”) also discusses communication systems with multiple agents.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.R.H./ Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148