DETAILED ACTION
Claims 1-20 are pending in the Instant Application.
Claims 1-20 are rejected (Final Rejection).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 8, 9, 11-13, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zamora Esquivel et al. (“Zamora”), United States Patent Application Publication No. 2021/0110197, in view of Hoang et al. (“Hoang”), United States Patent Application Publication No. 2021/0406672.
As per claim 1, Zamora discloses a computing platform comprising: a processor; and memory storing computer-readable instructions that, when executed by the processor, cause the computing platform to:
receive, from a training database, a training dataset comprising a plurality of inputs for a neural network ([0035] wherein input training data is received to train the neural network as described);
categorize, using a clustering algorithm, the plurality of inputs into a plurality of groups, wherein each of the groups is characterized by a corresponding group identifier ([0086] wherein each input is classified into a cluster with a group identifier representing a user);
iteratively train, based on the plurality of inputs and a plurality of group identifiers, as determined using the clustering algorithm, associated with the plurality of inputs, the neural network ([0109] wherein an iterative training process is described for the clustering algorithm), wherein the training comprises:
providing an input of the plurality of inputs, to a plurality of input nodes of the neural network, generating, from a plurality of output nodes, an output based on the input, determining an error value based on the output, a group identifier associated with the input, and a loss function, and based on the error value, modifying one or more model parameters of the neural network ([0064] wherein the parameters are modified based on the loss function and error values until the model is stable as described in [0109]);
determine user interpretation of categorization as performed by the clustering algorithm, wherein the user interpretation comprises associations between the plurality of group identifiers, of the plurality of groups, and user-assigned labels ([0099] wherein the clustering algorithm performs the grouping and clustering and the user assigns the labels); but does not disclose to map, based on the associations between the plurality of group identifiers and the user-assigned labels, respective one or more output nodes, of the plurality of output nodes of the neural network, to a respective user-assigned label; and send, to a user computing device, the model parameters of the neural network and a mapping between the plurality of output nodes of the neural network and the user-assigned labels.

However, Hoang teaches to map, based on the associations between the plurality of group identifiers and the user-assigned labels, respective one or more output nodes, of the plurality of output nodes of the neural network, to a respective user-assigned label ([0074] wherein a mapping is made between the user-assigned label and the group identifier (breed in the prior art)); and send, to a user computing device, the model parameters of the neural network and a mapping between the plurality of output nodes of the neural network and the user-assigned labels ([0075] wherein the user is sent, for review, the model parameters (weights and probabilities in the prior art) and the output nodes and the user-assigned labels in the form of the results of labeling the breeds in the prior art).
Both Zamora and Hoang describe using a deep neural network. One could use the mapping and review element in Hoang with the clustering in Zamora to teach the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the method of training and updating a deep neural network with user labeling in Zamora with the user also serving as feedback, by providing mappings to the user as in Hoang, in order to use the human brain to augment a computationally intensive procedure.
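For illustration only (not part of the record), a minimal sketch of the claim 1 flow as combined above, using hypothetical inputs, centroids, and label names (none drawn from the cited references): inputs are clustered into groups, each group receives a group identifier, and the user's interpretation maps output nodes to user-assigned labels.

```python
def assign_groups(inputs, centroids):
    """Centroid-based clustering step: each input receives the group
    identifier of its nearest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda g: dist(x, centroids[g]))
            for x in inputs]

# Hypothetical training inputs and cluster centroids.
inputs = [(0.1, 0.2), (0.9, 0.8), (0.15, 0.25), (0.85, 0.9)]
centroids = [(0.1, 0.2), (0.9, 0.85)]
group_ids = assign_groups(inputs, centroids)

# User interpretation: associations between group identifiers and
# user-assigned labels (label names invented for illustration).
user_labels = {0: "cat", 1: "dog"}

# Mapping output nodes of the network to user-assigned labels: output
# node g corresponds to group g, hence to user_labels[g].
output_node_to_label = {node: user_labels[g]
                        for node, g in enumerate(sorted(set(group_ids)))}
```

The network itself and the gradient updates described at [0064] and [0109] are omitted; the sketch shows only the grouping and the node-to-label mapping at issue in the combination.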
As per claim 2, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1. Zamora further discloses wherein the neural network is for classifying an input as corresponding to one of the user-assigned labels ([0040] wherein the neural network is meant to classify the input data, and [0099] wherein, in the supervised clustering described, the labels are user-assigned).
As per claim 6, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1. Zamora further discloses wherein a dictionary data stores the associations between the plurality of group identifiers and the user-assigned labels ([0099] and [0110] wherein group identifiers such as image information and face information are associated with a user-assigned label such as a cell phone identifier and are stored in the system).
As per claim 8, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1. Zamora further discloses wherein the plurality of inputs comprises computer-readable bit patterns corresponding to the images ([0116] wherein images are described as the inputs).
As per claim 9, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1. Zamora further discloses wherein the plurality of inputs comprises images and the user-assigned labels correspond to descriptions associated with the images ([0051] wherein the inputs are images placed in categories, wherein the categories can be labeled by users as described in [0099]).
As per claim 11, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1. Zamora further discloses wherein the clustering algorithm comprises one or more of hierarchical clustering, centroid-based clustering, density-based clustering, or distribution-based clustering ([0045]; Examiner notes the recitation of “one or more” in the claim language, requiring only one of the clustering algorithms, and [0045] describes a probability distribution model, which is distribution-based clustering).
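As an illustration only of distribution-based clustering, one of the recited alternatives, a minimal sketch assigning each point to the most likely of several one-dimensional Gaussian components; the means and variances are hypothetical and not drawn from the cited references:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a one-dimensional Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def distribution_based_assign(points, components):
    """Distribution-based clustering step: each point is assigned the
    index of the Gaussian component under which it is most likely."""
    return [max(range(len(components)),
                key=lambda k: gaussian_pdf(x, *components[k]))
            for x in points]

# Hypothetical (mean, variance) pairs for two components.
components = [(0.0, 1.0), (5.0, 1.0)]
labels = distribution_based_assign([-0.3, 0.2, 4.8, 5.5], components)
```

Hierarchical, centroid-based, and density-based variants differ only in how the assignment rule is computed; the claim, as noted, requires just one of the four.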
As per claim 12, claim 12 is the method performed by the computing platform of claim 1 and is rejected for the same rationale and reasoning.
As per claim 13, claim 13 is the method performed by the computing platform of claim 2 and is rejected for the same rationale and reasoning.
As per claim 17, claim 17 is the method performed by the computing platform of claim 6 and is rejected for the same rationale and reasoning.
As per claim 19, claim 19 is the method performed by the computing platform of claim 8 and is rejected for the same rationale and reasoning.
As per claim 20, claim 20 is the program product that implements the system of claim 1 and is rejected for the same rationale and reasoning.
Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zamora in view of Hoang, and further in view of Papanikolopoulos et al. (“Papanikolopoulos”), United States Patent Application Publication No. 2019/0274257.
As per claim 3, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1, but does not disclose wherein the instructions, when executed by the processor, cause the computing platform to remove groups which comprise a number of inputs less than a threshold quantity. However, Papanikolopoulos teaches wherein the instructions, when executed by the processor, cause the computing platform to remove groups which comprise a number of inputs less than a threshold quantity ([0105] wherein groups (clusters in the prior art) are removed if less than a threshold size).
Both Zamora and Papanikolopoulos describe grouping by machine learning model. One could incorporate the minimum threshold quantity per group in Papanikolopoulos with the clusters in Zamora to teach the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the method of training a model by clustering and labeling in Zamora with the removal of groups under a certain size in Papanikolopoulos in order to reduce noise in the training data.
As per claim 14, claim 14 is the method performed by the computing platform of claim 3 and is rejected for the same rationale and reasoning.
Claims 4, 5, 7, 15, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zamora in view of Hoang, and further in view of Chen, United States Patent Application Publication No. 2021/02357680.
As per claim 4, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1, but does not teach wherein the instructions, when executed by the processor, cause the computing platform to not use, for the training, groups which comprise a number of inputs less than a threshold quantity. However, Chen teaches wherein the instructions, when executed by the processor, cause the computing platform to not use, for the training, groups which comprise a number of inputs less than a threshold quantity ([0085] wherein a minimum threshold of a value for detection is defined).
Both Zamora and Chen describe training data. One could apply the limit from Chen, excluding training data groups without a certain threshold number of inputs, to the training data in Zamora to teach the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the method of training a clustering machine learning model as in Zamora with a quantity requirement for groups as in Chen in order to be able to customize the quality of the learning ability of the machine learning model based on the input type.
As per claim 5, note the rejection of claim 4 where Zamora, Hoang and Chen are combined. The combination teaches the computing platform of claim 4. Chen further teaches wherein the threshold quantity is a fixed fraction of a quantity of the plurality of inputs ([0085] wherein the quantity is fixed, and is a fraction as a portion of the whole).
As per claim 7, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1, but does not teach wherein the plurality of inputs comprises machine-scanned handwritten characters and the user-assigned labels comprise descriptions of the characters. However, Chen teaches wherein the plurality of inputs comprises machine-scanned handwritten characters and the user-assigned labels comprise descriptions of the characters ([0063] and [0138] wherein the MNIST dataset consists of handwritten characters with user-assigned labels). Zamora describes image training data but does not specifically describe using machine-scanned handwritten characters as training data. The Supreme Court in KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007) identified a number of rationales to support a conclusion of obviousness which are consistent with the proper “functional approach” to the determination of obviousness as laid down in Graham. One such rationale is (B) Simple substitution of one known element for another to obtain predictable results. In this case, one set of training data can be substituted for another and the results would be predictable since the method of training the model is not dependent on the training data itself, so the same procedure would be used no matter what the training data and the results would then be predictable.
As per claim 15, claim 15 is the method performed by the computing platform of claim 4 and is rejected for the same rationale and reasoning.
As per claim 16, claim 16 is the method performed by the computing platform of claim 5 and is rejected for the same rationale and reasoning.
As per claim 18, claim 18 is the method performed by the computing platform of claim 7 and is rejected for the same rationale and reasoning.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zamora in view of Hoang, and further in view of Soljacic et al. (“Soljacic”), United States Patent Application Publication No. 2018/0260703.
As per claim 10, note the rejection of claim 1 where Zamora and Hoang are combined. The combination teaches the computing platform of claim 1, but does not teach wherein the loss function is one of: a mean squared error loss function, a binary cross-entropy loss function, or a categorical cross-entropy loss function. However, Soljacic teaches wherein the loss function is one of: a mean squared error loss function, a binary cross-entropy loss function, or a categorical cross-entropy loss function (Examiner notes the use of “is one of” indicating that only one of the loss functions must be taught to teach the limitation, and [0101] describes a mean squared error loss function).
Zamora discloses a loss function, but does not expressly describe one listed in the claim. However, Soljacic discloses one of the listed loss functions. The Supreme Court in KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007) identified a number of rationales to support a conclusion of obviousness which are consistent with the proper “functional approach” to the determination of obviousness as laid down in Graham. One such rationale is (B) Simple substitution of one known element for another to obtain predictable results. In this case, one loss function can be substituted for another and the results would be predictable because both evaluate how well the neural network is performing.
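To illustrate only the substitution rationale, the three recited loss functions can each be sketched as below; every one of them decreases as the prediction approaches the target, which is the sense in which one may substitute for another. The numeric values are hypothetical and not drawn from the cited references.

```python
import math

def mse(y_true, y_pred):
    """Mean squared error loss."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy loss; eps guards against log(0)."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy for a one-hot y_true and a
    probability distribution y_pred over classes."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

# A prediction close to the one-hot target scores lower (better) than
# a prediction far from it, under any of the three losses.
good = categorical_cross_entropy([0, 1, 0], [0.1, 0.8, 0.1])
bad = categorical_cross_entropy([0, 1, 0], [0.6, 0.2, 0.2])
```

All three evaluate how well the network's output matches the target, consistent with the rationale that substituting one for another yields predictable results.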
Response to Arguments
As per the rejections under 35 U.S.C. 101, Applicant’s arguments with respect to 35 U.S.C. § 101, see REMARKS filed 20 October 2025, have been fully considered and are persuasive. The claim limitations, as currently amended, put a limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. Therefore, the claims integrate the abstract idea into a practical application. The rejections of claims 1-20 with respect to 35 U.S.C. § 101 have been withdrawn.
As per the prior art, Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KANNAN SHANMUGASUNDARAM whose telephone number is (571) 270-7763. The examiner can normally be reached M-F, 9:00 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KANNAN SHANMUGASUNDARAM/Primary Examiner, Art Unit 2168