DETAILED ACTION
This Office action is in reply to application no. 18/317,112, filed 15 May 2023. Claims 1-20 are pending and are considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Each claim includes an optimization step, such as in claim 1: “optimizing an offer to the target negotiating party” in no particular manner but during a particular time frame. The originally filed application gives no specific detail at all as to how this optimization is carried out. It is broadly claimed and would encompass any and all means for performing such an optimization.
Optimization is only known to be solvable in limited, specific circumstances, unlike what is claimed here. Nesterov in his monograph “Lectures on Convex Optimization” puts it directly and bluntly: “the main fact, which should be known to any person dealing with optimization models, is that in general, optimization problems are unsolvable.” [emphasis in the original]
See MPEP § 2161.01(I): “original claims may lack written description when the claims define the invention in functional language specifying a desired result but the specification does not sufficiently describe how the function is performed or the result is achieved. For software, this can occur when the algorithm or steps/procedure for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). In other words, the algorithm or steps/procedure taken to perform the function must be described with sufficient detail so that one of ordinary skill in the art would understand how the inventor intended the function to be performed.” [emphasis in the original]
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The claims all include a step of training a neural architecture “using only the received negotiation traces”, but the specification does not appear to allow for this. In para. 42, that language is given verbatim, but earlier, in para. 38, input parameters are also used to train the neural architecture; this is within the same process (referred to overall as item 300 in the drawings) and is not described in such a way as to make it optional. Claim 2, for example, directly contradicts claim 1 in this regard.
Further, using only a training dataset is insufficient to train a neural network. According to Rosebrock (and well known to those of ordinary skill in the relevant art), training a neural network requires, at a minimum, four things: a training dataset, a predetermined model or architecture, a loss function, and some particular optimization method.
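Purely by way of illustration of the four ingredients identified above, and not as a representation of anything in the application or the cited references, a minimal training loop can be sketched as follows (every name and value is hypothetical):

```python
# Illustrative sketch of the four training ingredients: a dataset alone
# is insufficient; a model, a loss function, and an optimization method
# are also required before any training can occur.

# 1. A training dataset: pairs of (input, target) values.
dataset = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # targets follow y = 2x + 1

# 2. A predetermined model/architecture: here, a single linear unit.
w, b = 0.0, 0.0

# 3. A loss function: mean squared error over the dataset.
def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in dataset) / len(dataset)

# 4. A particular optimization method: full-batch gradient descent.
lr = 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in dataset) / len(dataset)
    grad_b = sum(2 * (w * x + b - y) for x, y in dataset) / len(dataset)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Removing any one of the four ingredients (for instance, the loss function or the optimization step) leaves the sketch unable to produce a trained model, consistent with the point above.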
For the purpose of compact prosecution, the word “only” will be disregarded as it creates an irreconcilable inconsistency with the specification itself and with what is known in the art about such training.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) a data gathering step (receiving previous offers), training a neural architecture in no particular manner (which will be discussed below), and optimizing an offer to a party in no particular manner.
Gathering data and attempting to optimize an offer are fundamental business practices and commercial interactions, each of which is among the “certain methods of organizing human activity” deemed abstract. Also, setting aside momentarily the neural network limitation, the essential functions can be performed mentally. A business negotiator can gather data and, if the options are few, can optimize an offer to a party. None of this presents any practical difficulty, and none requires any technology at all.
This judicial exception is not integrated into a practical application because aside from the bare inclusion of a generic computer and nondescript use of well-known AI elements, discussed below, nothing is done beyond what was set forth above, which does not go beyond generally linking the abstract idea to the technological environment of AI-enabled, generic computers. See MPEP § 2106.05(h).
As the claims only manipulate data pertaining to negotiations of offers, they do not improve the “functioning of a computer” or of “any other technology or technical field”. See MPEP § 2106.05(a). They do not apply the abstract idea “with, or by use of a particular machine”, MPEP § 2106.05(b), as the below-cited Guidance is clear that a generic computer is not the particular machine envisioned.
They do not effect a “transformation or reduction of a particular article to a different state or thing”, MPEP § 2106.05(c). First, such data, being intangible, are not a particular article at all. Second, the claimed manipulation is neither transformative nor reductive; as the courts have pointed out, in the end, data are still data.
They do not apply the abstract idea “in some other meaningful way beyond generally linking [it] to a particular technological environment”, MPEP § 2106.05(e); the claims’ lack of technical and algorithmic detail leaves them at the level of just such a general linkage.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional claim limitations, considered individually and as an ordered combination, are insufficient to elevate an otherwise-ineligible claim.
The claim includes a processor executing instructions stored on a medium. These elements are recited at a high degree of generality, and the specification does not meaningfully limit them, such that the claims encompass a generic computer. It performs only the generic computer functions of manipulating data in no particular manner and sharing data with persons and/or other devices.
Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. The type of information being manipulated does not impose meaningful limitations or render the idea less abstract. In light of Recentive1, the use of known machine-learning techniques, where the only possible novelty is the type of data used, is not per se sufficient.
The claim limitations when considered as an ordered combination – a generic computer performing a sequence of abstract steps while making use of known AI techniques – do nothing more than when they are analyzed individually. The other independent claims are simply different embodiments but are likewise directed to a generic computer performing, essentially, the same process.
The dependent claims further do not amount to significantly more than the abstract idea: claims 2, 6, 7, 9, 13, 14, 16, 19 and 20 are simply further descriptive of the type of information being manipulated. Claims 3 and 10 consist entirely of a mere duplication of parts, of no patentable significance. Claims 4 and 11 simply recite making a choice, and claims 5, 12, 17 and 18 simply recite further, abstract manipulation of data.
The claims are not patent eligible. For further guidance please see MPEP §§ 2106.03 – 2106.07(c) (formerly referred to as the “2019 Revised Patent Subject Matter Eligibility Guidance”, 84 Fed. Reg. 50, 55 (7 January 2019)).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3, 8, 10 and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Moore (U.S. Publication No. 2014/0172522).
With regard to Claim 1:
A system for selecting and using a neural architecture comprising:
a non-transitory computer readable medium configured to store instructions thereon; [0042; a memory stores instructions for execution by a CPU] and
a processor connected to the non-transitory computer readable medium, [id.] wherein the processor is configured to execute the instructions for:
receiving negotiation traces, wherein the negotiation traces include offers from previous negotiations with a target negotiating party; [0009; an “indication of interest” is “received” from a “first party” based on “offer prices” generated by a “neural network” which has been trained “based on the previous behaviour” of that party or other parties]
training the neural architecture using only the received negotiation traces; [id.] and
optimizing an offer to the target negotiating party during a negotiation using the trained neural architecture. [0055; a “target maximum discount” is offered]
In this and the subsequent claims, the limitation that the negotiation traces “include offers from previous negotiations with a target negotiating party” is considered but given no patentable weight. First, it consists entirely of nonfunctional, descriptive language which imparts neither structure nor functionality to any claimed embodiment. Second, because the traces merely “include” these offers, they may also include other information, and any further processing can be based entirely on that other information. The phrase “negotiation traces”, consistent with the instant specification, see e.g. para. 12, is interpreted as individual steps in a negotiation.
With regard to Claim 3:
The system of claim 1, wherein the processor is configured to execute the instructions for:
training a plurality of neural architectures, wherein the plurality of neural architectures includes the neural architecture. [0072; multiple neural networks may be trained]
This claim is not patentably distinct from claim 1 as it consists entirely of a mere duplication of parts, which is of no patentable significance as no new and unexpected result is inherent or disclosed. See MPEP § 2144.04(VI)(B). The reference is provided for the purpose of compact prosecution.
With regard to Claim 8:
A method of selecting and using a neural architecture comprising:
receiving negotiation traces, wherein the negotiation traces include offers from previous negotiations with a target negotiating party; [0009; an “indication of interest” is “received” from a “first party” based on “offer prices” generated by a “neural network” which has been trained “based on the previous behaviour” of that party or other parties]
training the neural architecture using only the received negotiation traces; [id.] and
optimizing an offer to the target negotiating party during a negotiation using the trained neural architecture. [0055; a “target maximum discount” is offered]
With regard to Claim 10:
The method of claim 8, further comprising:
training a plurality of neural architectures, wherein the plurality of neural architectures includes the neural architecture. [0072; multiple neural networks may be trained]
This claim is not patentably distinct from claim 8 as it consists entirely of a mere duplication of parts, which is of no patentable significance as no new and unexpected result is inherent or disclosed. See MPEP § 2144.04(VI)(B). The reference is provided for the purpose of compact prosecution.
With regard to Claim 15:
A non-transitory computer readable medium configured to store instructions [0042; a memory stores instructions for execution by a CPU] for selecting and using a neural architecture, wherein the instructions are configured to cause a processor to execute operations comprising:
receiving negotiation traces, wherein the negotiation traces include offers from previous negotiations with a target negotiating party; [0009; an “indication of interest” is “received” from a “first party” based on “offer prices” generated by a “neural network” which has been trained “based on the previous behaviour” of that party or other parties]
training the neural architecture using only the received negotiation traces; [id.] and
optimizing an offer to the target negotiating party during a negotiation using the trained neural architecture. [0055; a “target maximum discount” is offered]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Moore in view of Caglar (U.S. Publication No. 2020/0387565).
In-line citations are to Moore. These claims are similar, so they are analyzed together.
With regard to Claim 2:
The system of claim 1, wherein the processor is further configured to execute the instructions for:
generating the neural architecture based on received architecture parameters, wherein the received architecture parameters include an architecture width and an architecture depth, and training the neural architecture comprises training the neural architecture generated based on the received architecture parameters.
With regard to Claim 9:
The method of claim 8, further comprising:
generating the neural architecture based on received architecture parameters, wherein the received architecture parameters include an architecture width and an architecture depth, and training the neural architecture comprises training the neural architecture generated based on the received architecture parameters.
With regard to Claim 16:
The non-transitory computer readable medium of claim 15, wherein the operations further comprise:
generating the neural architecture based on received architecture parameters, wherein the received architecture parameters include an architecture width and an architecture depth, and training the neural architecture comprises training the neural architecture generated based on the received architecture parameters.
Moore teaches the system of claim 1, method of claim 8 and medium of claim 15, but does not explicitly teach these parameters. Though the details are of no patentable significance as explained below, such parameters are known in the art. Caglar teaches a system of computational model optimizations [title] in which a “neural network” is trained “using backpropagation”. [0069] “Parameters” within a neural network may include a “number of layers”, which reads on a depth, and a “convolutional kernel width”. [0097] Caglar and Moore are analogous art as each is directed to electronic means for training a neural network in order to try to optimize some quantity.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Caglar with that of Moore in order to increase modeling efficiency, as taught by Caglar; [0002] further, it is simply a substitution of one known part for another with predictable results, simply using Caglar’s parameters in place of, or in addition to, those of Moore; the substitution produces no new and unexpected result.
That “received architecture parameters include an architecture width and an architecture depth” is considered but given no patentable weight. First, it consists entirely of nonfunctional, descriptive language which imparts neither structure nor functionality to any claimed embodiment. Second, because the parameters merely “include” these, they may also include other information, and any further processing can be based entirely on that other information.
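For illustration only, width and depth parameters of the kind discussed above could parameterize a generated architecture as sketched below; this is a sketch under assumed conventions, and every name is hypothetical rather than drawn from the claims or references:

```python
# Illustrative sketch: "width" and "depth" parameters determining the
# layer sizes of a fully connected network. Hypothetical names throughout.

def generate_architecture(input_size, width, depth, output_size):
    """Return layer sizes for a network with `depth` hidden layers,
    each of `width` units, between the input and output layers."""
    return [input_size] + [width] * depth + [output_size]

layers = generate_architecture(input_size=4, width=16, depth=3, output_size=1)
print(layers)  # [4, 16, 16, 16, 1]
```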
Claim(s) 4, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Moore in view of Bachrach et al. (WIPO Publication No. 2023/217868, filed 10 May 2023).
These claims are similar, so they are analyzed together.
With regard to Claim 4:
The system of claim 3, wherein the processor is configured to execute the instructions for:
selecting the neural architecture as an optimal neural network from among the plurality of neural architectures based on simulated negotiations amongst the plurality of neural architectures.
With regard to Claim 11:
The method of claim 10, further comprising:
selecting the neural architecture from among the plurality of neural architectures as an optimal neural network based on simulated negotiations amongst the plurality of neural architectures.
With regard to Claim 17:
The non-transitory computer readable medium of claim 15, wherein the operations further comprise:
training a plurality of neural architectures, wherein the plurality of neural architectures includes the neural architecture; [0072; multiple neural networks may be trained] and
selecting the neural architecture from among the plurality of neural architectures as an optimal neural network based on simulated negotiations amongst the plurality of neural architectures.
Moore teaches the system of claim 3, method of claim 10, and medium of claim 15, but does not explicitly teach this basis for a selection; it is, however, known in the art. Bachrach teaches a contract negotiation system [title] which selects a particular agent within a multi-agent system to perform a particular action; [0028] the multi-agent systems “use neural networks”. [0002] The agents learn “by simulating negotiation processes” of “other agents”. [0081] Bachrach and Moore are analogous art as each is directed to the use of neural networks in a negotiation process.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Bachrach with that of Moore in order to improve efficiency, as taught by Bachrach; [abstract] further, it is simply a substitution of one known part for another with predictable results, simply using a neural network from a plurality of such networks on the basis of Bachrach rather than, or in addition to, that of Moore; the substitution produces no new and unexpected result.
Claim(s) 5-7, 12-14 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moore in view of Dey (U.S. Publication No. 2022/0076288).
Claims 5, 12 and 18 are similar, so they are analyzed together.
With regard to Claim 5:
The system of claim 1, wherein the processor is configured to execute the instructions for:
determining, during the negotiation, whether an accuracy of the trained neural architecture is satisfactory.
With regard to Claim 12:
The method of claim 8, further comprising:
determining, during the negotiation, whether an accuracy of the trained neural architecture is satisfactory.
With regard to Claim 18:
The non-transitory computer readable medium of claim 15, wherein the operations further comprise:
determining, during the negotiation, whether an accuracy of the trained neural architecture is satisfactory.
Moore teaches the system of claim 1, method of claim 8, and medium of claim 15, including performing steps during a negotiation process using a neural network, but does not explicitly teach determining a satisfactory accuracy; it is, however, known in the art. Dey teaches a system for managing a reward program. [title] It performs a negotiation, [0003] and uses a neural network in which it determines whether the “accuracy” of the neural network is “sufficient” with a certain set of parameters. [0155] If so, it outputs a result and incorporates the parameters for developing “new machine learning models”, [id.] and if the “accuracy is insufficient” it can be improved. [id.] Dey and Moore are analogous art as each is directed to the use of machine learning in conducting negotiations.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dey with that of Moore in order to improve accuracy, as taught by Dey; further, it is simply a substitution of one known part for another with predictable results, simply using Dey’s determination of accuracy as a trigger for deploying a model in place of the basis of Moore; the substitution produces no new and unexpected result.
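Purely as an illustration of the kind of accuracy-gated model selection attributed to Dey above, and not as a representation of any reference’s actual implementation, such a check might be sketched as follows (every name, value, and the threshold are hypothetical):

```python
# Illustrative sketch: keep using the current trained model if its
# accuracy is satisfactory; otherwise fall back to a better candidate.

def choose_model(current, candidates, evaluate, threshold=0.9):
    """Return `current` if its accuracy meets `threshold`;
    otherwise return the best-scoring candidate model."""
    if evaluate(current) >= threshold:
        return current
    return max(candidates, key=evaluate)

# Hypothetical accuracy scores for two trained architectures.
accuracies = {"model_a": 0.85, "model_b": 0.95}

print(choose_model("model_a", ["model_b"], accuracies.get))  # "model_b"
print(choose_model("model_b", ["model_a"], accuracies.get))  # "model_b"
```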
With regard to Claim 6:
The system of claim 5, wherein the processor is configured to execute the instructions for:
optimizing a next offer, after the offer, using a different trained neural architecture in response to a determination that the accuracy of the trained neural architecture is unsatisfactory. [Dey, as cited above in regard to claim 5; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
With regard to Claim 7:
The system of claim 5, wherein the processor is configured to execute the instructions for:
optimizing a next offer, after the offer, using the trained neural architecture in response to a determination that the accuracy of the trained neural architecture is satisfactory. [Dey, as cited above in regard to claim 5; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
With regard to Claim 13:
The method of claim 12, further comprising:
optimizing a next offer, after the offer, using a different trained neural architecture in response to a determination that the accuracy of the trained neural architecture is unsatisfactory. [Dey, as cited above in regard to claim 12; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
With regard to Claim 14:
The method of claim 12, further comprising:
optimizing a next offer, after the offer, using the trained neural architecture in response to a determination that the accuracy of the trained neural architecture is satisfactory. [Dey, as cited above in regard to claim 12; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
With regard to Claim 19:
The non-transitory computer readable medium of claim 18, wherein the operations further comprise:
optimizing a next offer, after the offer, using a different trained neural architecture in response to a determination that the accuracy of the trained neural architecture is unsatisfactory. [Dey, as cited above in regard to claim 18; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
With regard to Claim 20:
The non-transitory computer readable medium of claim 18, wherein the operations further comprise:
optimizing a next offer, after the offer, using the trained neural architecture in response to a determination that the accuracy of the trained neural architecture is satisfactory. [Dey, as cited above in regard to claim 18; this is simply a matter of using Dey’s process in Moore’s optimization, a substitution of one known part for another with predictable results]
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT C ANDERSON whose telephone number is (571)270-7442. The examiner can normally be reached M-F 9:00 to 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT C ANDERSON/Primary Examiner, Art Unit 3694
1 Recentive Analytics, Inc. v. Fox Corp. et al., 134 F.4th 1205, 1216 (Fed. Cir. 2025)