Detailed Action
This Office Action is in response to the remarks entered on 12/24/2025. New Claims 11-15 are added. Claims 1-15 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/24/2025 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the claim recites “responsive to receiving the user-specified data set and the user-specified task, subsequent to training the first machine learning model and during execution of the first machine learning model” in lines 21-23. It is unclear what constitutes “receiving … during execution of the first machine learning model.” The machine learning engine has already received the data and generated the first machine learning model based on that data. Is the machine learning engine receiving the same data again, or reapplying the same data to generate another model?
For purposes of examination, the limitation is interpreted to mean that the machine learning engine generates/deploys one machine learning model at a time based on the same data set.
Claims 2-5 depend from claim 1. Therefore, the claims inherit the same deficiency.
Claim 6 is a non-transitory, computer-readable medium claim which implements the same features as the method claim 1, and is rejected for at least the same reasons. Claims 7-10 depend from claim 6. Therefore, the claims inherit the same deficiency.
Claim 11 is a system claim which implements the same features as the method claim 1, and is rejected for at least the same reasons. Claims 12-15 depend from claim 11. Therefore, the claims inherit the same deficiency.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1,
Step 1: Claim 1 recites a method for dynamically generating a plurality of machine learning models for processing a user-specified data set. Therefore, it is directed to the statutory category of processes.
2A Prong 1:
analyzing, …
selecting, … the at least one characteristic of the user-specified task; (a mental process of judgment that can be performed in one’s mind. The broadest reasonable interpretation of the limitation encompasses a person (e.g., a programmer) selecting an encoder based on the data set)
to generate a first output by processing at least one encoding of the user-specified data set (mental process of evaluation – generating the output by processing the user-specified data involves selecting data and making inferences, which can be performed with the aid of pen and paper)
to generate at least a second output by processing the at least one encoding of the user-specified data set (mental process of evaluation – generating the output by processing the user-specified data involves selecting data and making inferences, which can be performed with the aid of pen and paper)
2A Prong 2:
A method for dynamically generating a plurality of machine learning models for processing a user-specified data set, the method comprising: (mere instructions to apply an exception using a computer MPEP 2106.05(f))
receiving a user-specified data set and a user-specified task; (an insignificant extra-solution activity MPEP 2106.05(g) of mere data gathering)
analyzing, by a machine learning engine (mere instructions to apply an exception using a computer MPEP 2106.05(f))
selecting, by the machine learning engine (mere instructions to apply an exception using a computer MPEP 2106.05(f))
directing, by the machine learning engine, each of the selected plurality of encoders to encode the received user-specified data set; (an additional element amounting to mere instructions to apply an exception using a computer, MPEP 2106.05(f). The limitation merely recites applying a machine learning model to process input data and to generate output data)
generating, by the machine learning engine, a first machine learning model for processing the user-specified data set, wherein generating the first machine learning model is based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task; (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
training, by the machine learning engine, the first machine learning model; (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites generic training process of a neural network to implement the judicial exception on a computer)
directing, by the machine learning engine, the first machine learning model to generate
generating, by the machine learning engine, a second machine learning model based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task, responsive to receiving the user-specified data set and the user-specified task, subsequent to training the first machine learning model and during execution of the first machine learning model; and (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
directing, by the machine learning engine, the second machine learning model to
The additional elements identified above, considered individually and in combination, do not integrate the judicial exception into a practical application, as they amount to insignificant extra-solution activity and generic computer functions restricted to a field of use, implemented merely to perform the abstract idea identified above.
2B:
A method for dynamically generating a plurality of machine learning models for processing a user-specified data set, the method comprising: (mere instructions to apply an exception using a computer MPEP 2106.05(f))
receiving, by a machine learning engine, a user-specified data set and a user-specified task; (indicated as an insignificant extra-solution activity MPEP 2106.05(g) in Step 2A Prong 2. Thus, the limitation is re-evaluated in Step 2B as well-understood, routine, and conventional activity MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362)
analyzing, by a machine learning engine (mere instructions to apply an exception using a computer MPEP 2106.05(f))
selecting, by the machine learning engine (mere instructions to apply an exception using a computer MPEP 2106.05(f))
directing, by the machine learning engine, each of the selected plurality of encoders to encode the received user-specified data set; (an additional element amounting to mere instructions to apply an exception using a computer, MPEP 2106.05(f). The limitation merely recites applying a machine learning model to process input data and to generate output data)
generating, by the machine learning engine, a first machine learning model for processing the user-specified data set, wherein generating the first machine learning model is based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task; (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
training, by the machine learning engine, the first machine learning model; (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites generic training process of a neural network to implement the judicial exception on a computer)
directing, by the machine learning engine, the first machine learning model to generate
generating, by the machine learning engine, a second machine learning model based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task, responsive to receiving the user-specified data set and the user-specified task, subsequent to training the first machine learning model and during execution of the first machine learning model; and (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
directing, by the machine learning engine, the second machine learning model to generate
The additional elements identified above, considered individually and in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they constitute well-understood, routine, and conventional activity, generic computer functions, and elements restricted to a field of use, implemented merely to perform the abstract idea identified above.
Regarding claim 2,
Step 1: Processes, as above.
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: wherein generating the first machine learning model further comprises generating a neural network. (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
2B: wherein generating the first machine learning model further comprises generating a neural network. (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
Regarding claim 3,
Step 1: Processes, as above.
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: wherein generating the second machine learning model further comprises generating a neural network. (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
2B: wherein generating the second machine learning model further comprises generating a neural network. (mere instructions to apply an exception using a computer MPEP 2106.05(f). The limitation merely recites setting a machine learning model based on the user data set and the task to implement the generating and encoding process on a computer)
Regarding claim 4,
Step 1: Processes, as above.
2A Prong 1: Incorporates the rejection of claim 1.
2A Prong 2: further comprising providing, by the machine learning engine, access to at least one output selected from the group consisting of the first output and the second output. (insignificant extra-solution activity MPEP 2106.05(g) of presenting output. The broadest reasonable interpretation of the limitation encompasses the machine learning engine displaying and/or providing access to output data to the user and/or other devices)
2B: further comprising providing, by the machine learning engine, access to at least one output selected from the group consisting of the first output and the second output. (indicated as an insignificant extra-solution activity MPEP 2106.05(g) in Step 2A Prong 2. Thus, the limitation is re-evaluated in Step 2B as well-understood, routine, and conventional activity MPEP 2106.05(d) of receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362. The broadest reasonable interpretation of the limitation encompasses the machine learning engine transmitting data to other devices)
Regarding claim 5,
Step 1: Processes, as above.
2A Prong 1: further comprising directing,
2A Prong 2: directing, by the machine learning engine, (mere instructions to apply an exception using a computer MPEP 2106.05(f))
2B: directing, by the machine learning engine, (mere instructions to apply an exception using a computer MPEP 2106.05(f))
Regarding claim 6,
Step 1: Claim 6 recites a non-transitory, computer-readable medium comprising instructions tangibly stored on the non-transitory computer-readable medium. Therefore, it is directed to the statutory category of an article of manufacture.
2A Prong 1: Claim 6 is a computer readable medium claim which implements the same features as the method claim 1, and is rejected for at least the same reasons.
2A Prong 2: A non-transitory, computer-readable medium comprising instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform a method for dynamically generating a plurality of machine learning models for processing a user-specified data set, the method comprising: (mere instructions to apply an exception using a computer MPEP 2106.05(f))
2B: A non-transitory, computer-readable medium comprising instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform a method for dynamically generating a plurality of machine learning models for processing a user-specified data set, the method comprising: (mere instructions to apply an exception using a computer MPEP 2106.05(f))
Claim 7 is a computer readable medium claim which implements the same features as the method claim 2, and is rejected for at least the same reasons.
Claim 8 is a computer readable medium claim which implements the same features as the method claim 3, and is rejected for at least the same reasons.
Claim 9 is a computer readable medium claim which implements the same features as the method claim 4, and is rejected for at least the same reasons.
Claim 10 is a computer readable medium claim which implements the same features as the method claim 5, and is rejected for at least the same reasons.
Regarding claim 11,
Step 1: Claim 11 recites a machine learning system, comprising: a user interface. Therefore, it is directed to the statutory category of a machine.
2A Prong 1: Claim 11 is a machine learning system claim which implements the same features as the method claim 1, and is rejected for at least the same reasons.
2A Prong 2: a user interface, configured to select: a user-specified data set; and a user-specified task; (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
a memory, storing instructions for dynamically generating a plurality of machine learning models for processing a user-specified data set; and (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
a processor configured to communicate data with the user interface and the memory, the processor further configured to execute the instructions to: (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
2B: a user interface, configured to select: a user-specified data set; and a user-specified task; (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
a memory, storing instructions for dynamically generating a plurality of machine learning models for processing a user-specified data set; and (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
a processor configured to communicate data with the user interface and the memory, the processor further configured to execute the instructions to: (mere instructions to apply an exception using a generic computer component MPEP 2106.05(f))
Claim 12 is a machine learning system claim which implements the same features as the method claim 2, and is rejected for at least the same reasons.
Claim 13 is a machine learning system claim which implements the same features as the method claim 3, and is rejected for at least the same reasons.
Claim 14 is a machine learning system claim which implements the same features as the method claim 4, and is rejected for at least the same reasons.
Claim 15 is a machine learning system claim which implements the same features as the method claim 5, and is rejected for at least the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 9939792 B2, hereinafter ‘Chen’) in view of Amini (US 20190261243 A1, hereinafter ‘Amini’).
Regarding claim 1, Chen teaches:
A method for dynamically generating a plurality of machine learning models for processing a … ([Chen, col 4, line 24-41] The controller forwards the task and the data to the neural network and trains the neural network based on the task feature and the data feature. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which may include the first and the second machine learning models)
receiving a … ([Chen, col 8, line 44-48] Controller 210 receives task feature and input feature information 920, which are associated with a task and are interpreted as the user data set)
analyzing, by a machine learning engine, at least one characteristic of the … ([Chen, col 6, line 40-47] discloses that input feature and task feature information is forwarded to learning module 620, which contains a neural network. [Chen, col 6, line 51-67] The artificial neural network analyzes the features)
generating, by the machine learning engine, a first machine learning model for processing the user-specified data set, wherein generating the first machine learning model is based upon the at least one characteristic of the … and the at least one characteristic of the task; ([Chen, col 4, line 24-41] The controller forwards the task and the data to the neural network and trains the neural network based on the task feature and the data feature. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which may include the first and the second machine learning models)
training, by the machine learning engine, the first machine learning model; ([Chen, col 4, line 24-41] The controller (i.e., the machine learning engine) forwards the task and the data to the neural network and trains the neural network based on the task feature and the data feature. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which includes the first and the second machine learning models)
directing, by the machine learning engine, the first machine learning model to generate a first output by processing … ([Chen, col 8, line 18-59] The controller 210 (i.e., the machine learning engine) performs a learning operation and produces the pairing of sample definition features and corresponding execution mode selection results. The controller 210 receives task feature (i.e., task) and input feature information (i.e., data set) 910. The trial run of the first task is executed sequentially using CPU 231 (directing the first ML model to generate a first output). Additional current task feature and input feature information (i.e., the same data set as before) are received, and the controller makes a change to another execution mode if the parallel execution mode is not performing appropriately based upon the trial run (generating the second machine learning model if the first mode is not performing appropriately))
generating, by the machine learning engine, a second machine learning model based upon the at least one characteristic of the … and the at least one characteristic of the … ([Chen, col 8, line 18-59] The controller 210 (i.e., the machine learning engine) performs a learning operation and produces the pairing of sample definition features and corresponding execution mode selection results. The controller 210 receives task feature (i.e., task) and input feature information (i.e., data set) 910. The trial run of the first task is executed sequentially using CPU 231 (directing the first ML model to generate a first output). Additional current task feature and input feature information (i.e., the same data set as before) are received, and the controller makes a change to another execution mode if the parallel execution mode is not performing appropriately based upon the trial run (generating the second machine learning model if the first mode is not performing appropriately). [Chen, col 5, line 66 - col 6, line 5] and [Chen, col 3, line 66 - col 4, line 7] further support that the second trial run may be run while the first trial run is running and can be launched when a system starts running)
directing, by the machine learning engine, the second machine learning model to generate at least a second output by processing … ([Chen, col 8, line 18-59] The controller 210 (i.e., the machine learning engine) performs a learning operation and produces the pairing of sample definition features and corresponding execution mode selection results. The controller 210 receives task feature (i.e., task) and input feature information (i.e., data set) 910. The trial run of the first task is executed sequentially using CPU 231 (directing the first ML model to generate a first output). Additional current task feature and input feature information (i.e., the same data set as before) are received, and the controller makes a change to another execution mode if the parallel execution mode is not performing appropriately based upon the trial run (directing the method to generate the second machine learning model and its output if the first mode is not performing appropriately))
Chen does not specifically disclose:
processing a user-specified data set; receiving a user-specified data set and a user-specified task;
selecting, by the machine learning engine, a plurality of encoders based upon the at least one characteristic of the user-specified data set and at least one characteristic of the user-specified task;
directing, by the machine learning engine, each of the selected plurality of encoders to encode the received user-specified data set;
directing, by the machine learning engine, to generate output by processing the at least one encoding of the user-specified data set;
Amini teaches:
processing a user-specified data set; receiving a user-specified data set and a user-specified task; ([Amini, 0068] A user may specify the type of scene a camera is intended to capture (i.e., the user-specified task) and input information to the system indicating where the camera is installed and input indicative of deployment characteristics of the camera system (i.e., the user-specified data set), and the user may also update entered information by providing new inputs via a similar interface during operation of the camera system. [Amini, 0070] The received encoded data are provided to machine learning techniques such as deep learning and neural networks to detect physical objects in the captured video)
selecting, by the machine learning engine, a plurality of encoders based upon the at least one characteristic of the user-specified data set and at least one characteristic of the user-specified task ([Amini, 0075] discloses that machine learning techniques (i.e., the machine learning engine) can be implemented to select a channel from a set of available channels based on how certain combinations of network conditions and encoding schemes impact a quality level of a resulting video stream. [Amini, 0061 and 0067] collectively disclose that an encoder is selected to encode video captured by camera 610a and that encoder parameters are important to selecting an appropriate channel. [Amini, 0068] A user may specify the type of scene a camera is intended to capture (i.e., the user-specified task) and input information to the system indicating where the camera is installed and input indicative of deployment characteristics of the camera system (i.e., the user-specified data set), and the user may also update entered information by providing new inputs via a similar interface during operation of the camera system. [Amini, 0070] The received encoded data are provided to machine learning techniques such as deep learning and neural networks to detect physical objects in the captured video);
directing, by the machine learning engine, each of the selected plurality of encoders to encode the received user-specified data set ([Amini, 0075] discloses that machine learning techniques (i.e., the machine learning engine) can be implemented to select a channel from a set of available channels based on how certain combinations of network conditions and encoding schemes impact a quality level of a resulting video stream. [Amini, 0061 and 0067] collectively disclose that an encoder is selected to encode video captured by camera 610a and that encoder parameters are important to selecting an appropriate channel. [Amini, 0070] The received encoded data are provided to machine learning techniques such as deep learning and neural networks to detect physical objects in the captured video);
directing, by the machine learning engine, to generate output by processing the at least one encoding of the user-specified data set; ([Amini, 0075] discloses that machine learning techniques (i.e., the machine learning engine) can be implemented to select a channel from a set of available channels based on how certain combinations of network conditions and encoding schemes impact a quality level of a resulting video stream. [Amini, 0061 and 0067] collectively disclose that an encoder is selected to encode video captured by camera 610a and that encoder parameters are important to selecting an appropriate channel. [Amini, 0070] The received encoded data are provided to machine learning techniques such as deep learning and neural networks to detect physical objects in the captured video)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention and having the teachings of Chen and Amini, to use Amini's method of selecting an encoder to encode the data to implement the machine learning system of Chen. The suggestion and/or motivation to do so is to improve the performance of the system, as compressing the data using a selected encoder (an encoder specializing in encoding specific data) reduces the amount of data processed by each of the machine learning models.
Regarding claim 6, Chen teaches:
A non-transitory, computer-readable medium comprising instructions tangibly stored on the non-transitory computer-readable medium, wherein the instructions are executable by at least one processor to perform a method for dynamically generating a plurality of machine learning models for processing a user-specified data set, the method comprising ([Chen, col 8, line 59-67] discloses that the system of Chen is performed using computer-readable storage media. [Chen, col 6, line 27-33] Learning module 620 establishes a relationship between the heuristic information for execution modes and corresponding task feature set information and input feature set information. This process corresponds to directing the machine learning model to generate the output. Information regarding the relationships is forwarded to execution mode selection module 630, which establishes solutions (e.g., solutions 1, 2, 3, etc.) based on the relationship information. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which may include the first and the second machine learning models).
Claim 6 is a computer readable medium claim which implements the same features as the method claim 1, and is rejected for at least the same reasons.
Regarding claim 2, Chen teaches:
wherein generating the first machine learning model further comprises generating a neural network ([Chen, col 4, line 24-41] The controller forwards the task and the data to the neural network and trains the neural network based on the task feature and the data feature. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which may include the first and the second machine learning models).
Claim 7 is a computer readable medium claim which implements the same features as the method claim 2, and is rejected for at least the same reasons.
Regarding claim 3, Chen teaches:
wherein generating the second machine learning model further comprises generating a neural network ([Chen, col 4, line 24-41] The controller forwards the task and the data to the neural network and trains the neural network based on the task feature and the data feature. [Chen, col 2, line 18-23] teaches that there is a plurality of neural networks in the learning module, which may include the first and the second machine learning models).
Claim 8 is a computer readable medium claim which implements the same features as the method claim 3, and is rejected for at least the same reasons.
Regarding claim 4, Chen teaches:
further comprising providing, by the machine learning engine, access to at least one output selected from the group consisting of the first output and the second output ([Chen, col 5, line 44-54] The output of the controller, which contains the machine learning models, is being accessed and the difference between a target output and a current output is calculated)
Claim 9 is a computer readable medium claim which implements the same features as the method claim 4, and is rejected for at least the same reasons.
Regarding claim 5, Chen teaches:
The method of claim 1, further comprising directing, by the machine learning engine, the second machine learning model to determine a residual of the first output ([Chen, col 5, line 44-54] The output of the controller, which contains the machine learning models, is being accessed and the difference between a target output and a current output is calculated).
Claim 10 is a computer-readable medium claim which implements the same features as method claim 5, and is rejected for at least the same reasons.
Regarding claim 11, Chen in view of Amini teaches:
A machine learning system, comprising: a user interface, configured to select: a user-specified data set; and a user-specified task; ([Amini, 0068] A user may specify the type of scene a camera is intended to capture (i.e., the user-specified task) and input information to the system indicating where the camera is installed and input indicative of deployment characteristics of the camera system (i.e., the user-specified data set), and the user may also update entered information by providing new inputs via a similar interface (i.e., the user interface) during operation of the camera system. [Amini, 0070] The received encoded data are provided to machine learning techniques such as deep learning and neural networks to detect physical objects in the captured video.)
Claim 11 is a machine learning system claim which implements the same features as method claim 1, and is rejected for at least the same reasons.
Claim 12 is a machine learning system claim which implements the same features as method claim 2, and is rejected for at least the same reasons.
Claim 13 is a machine learning system claim which implements the same features as method claim 3, and is rejected for at least the same reasons.
Claim 14 is a machine learning system claim which implements the same features as method claim 4, and is rejected for at least the same reasons.
Claim 15 is a machine learning system claim which implements the same features as method claim 5, and is rejected for at least the same reasons.
Response to Arguments
Applicant's arguments filed 07/17/2025 have been fully considered but they are not persuasive.
Response to Arguments under 35 U.S.C. 101
Arguments: Applicant asserts that the dynamic generation of multiple machine learning models with coordinated encoder selection and parallel execution cannot practically be performed in the human mind due to the computational complexity. Applicant asserts that the MPEP has stated that a system directed to making specific classifications/determinations based on details derived from data gathering specific to a technical environment, or a claim that involves a several-step manipulation of data, cannot be practically performed in the human mind [Remarks, page 2]. Applicant further asserts that the claim utilizes the encodings to facilitate generation of multiple task-personalized machine learning models, used to respond to a user-specified task in parallel [Remarks, page 3].
Examiner’s Response: Examiner respectfully disagrees. First, the applicant correctly pointed out that the limitations of “selecting encoders based on analyzed characteristics of tasks/data” are abstract. Second, the limitations of “generating, by the machine learning engine, a first machine learning model for processing the user-specified data set, wherein generating the first machine learning model is based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task; training, by the machine learning engine, the first machine learning model; directing, by the machine learning engine, the first machine learning model to generate a first output by processing at least one encoding of the user-specified data set; generating, by the machine learning engine, a second machine learning model based upon the at least one characteristic of the user-specified data set and the at least one characteristic of the user-specified task, responsive to receiving the user-specified data set and the user-specified task, subsequent to training the first machine learning model and during execution of the first machine learning model; and directing, by the machine learning engine, the second machine learning model to generate at least a second output by processing the at least one encoding of the user-specified data set” are directed to mere instructions to apply an exception using a generic computer component (MPEP 2106.05(f)), as the first machine learning model and the second machine learning model are generic computer components that perform the abstract ideas of “to generate a first output by processing at least one encoding of the user-specified data set” and “to generate at least a second output by processing the at least one encoding of the user-specified data set.” The limitations of ‘generating a first output by processing the user-specified data set’ and ‘generating a second output by processing the user-specified data set’ do not require a computer component and can be performed with the aid of pen and paper, as generating the output by processing the user-specified data involves selecting data and making inferences. The parallel processing of user-specified data and tasks using the trained machine learning models is directed to mere instructions to apply an exception using a generic computer component.
Accordingly, arguments to independent claims 1 and 6 are not persuasive. Similarly, arguments to dependent claims 2-5, 7-10, and 12-15, which depend from claims 1, 6, and 11, are not persuasive.
Arguments: Applicant asserts that (a) the Federal Circuit has determined that limitations that “detail how an action is achieved in several steps” and “refer to specific technological features functioning together to provide … granular, nuanced, and useful classification” would not qualify as mere instructions, and (b) Example 39 is analogous to the claimed invention, and, like Example 39, the invention involves a series of steps for training machine learning models using particular data inputs and processing steps [Remarks, pages 4-5].
Examiner’s Response: Examiner respectfully disagrees. Regarding (a), according to MPEP 2106.05(a), examples that the courts have indicated may not be sufficient to show an improvement to technology include: iii. Gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48. Similar to TLI Communications, the instant application discloses gathering the user-specified data and user-specified tasks and analyzing the information using conventional techniques, including the first machine learning model and the second machine learning model. Therefore, the claim limitations are not sufficient to show an improvement to technology.
Regarding (b), Example 39 and the instant application are distinguishable. Example 39 is eligible because its claim does not recite any of the judicial exceptions enumerated in the 2019 PEG; for instance, the claim does not recite any mathematical relationships, formulas, or calculations. The claimed invention, in contrast, describes a method of processing a user-specified data set using selected encoders and generating multiple machine learning models using the generated encodings. As discussed above, the limitations of ‘processing a user-specified data set using selected encoders, and generating multiple machine learning models using the generated encodings’ are judicial exceptions. Therefore, the claimed invention does not integrate the abstract idea into a practical application.
Accordingly, arguments to independent claims 1 and 6 are not persuasive. Similarly, arguments to dependent claims 2-5, 7-10, and 12-15, which depend from claims 1, 6, and 11, are not persuasive.
Response to Arguments under 35 U.S.C. 103
Applicant’s arguments with respect to claims 1 and 6 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20200356900-A1 (This prior art is pertinent as it discloses generating machine learning models concurrently).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON whose telephone number is (571)272-2072. The examiner can normally be reached Monday – Friday 7:30AM – 4:30PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar can be reached at (571)270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUN KWON/Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127