Prosecution Insights
Last updated: April 19, 2026
Application No. 17/311,730

CONTENT CLASSIFICATION METHOD AND CLASSIFICATION MODEL GENERATION METHOD

Final Rejection — §103, §112
Filed
Jun 08, 2021
Examiner
GODO, MORIAM MOSUNMOLA
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
Semiconductor Energy Laboratory Co. Ltd.
OA Round
4 (Final)
Grant Probability: 44% (Moderate)
OA Rounds: 5-6
To Grant: 4y 8m
With Interview: 78%

Examiner Intelligence

Grants 44% of resolved cases.

Career Allow Rate: 44% (30 granted / 68 resolved; -10.9% vs TC avg)
Interview Lift: strong, +33.4% across resolved cases with interview
Avg Prosecution: 4y 8m (typical timeline; 47 currently pending)
Total Applications: 115 across all art units (career history)
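The career figures above are simple ratios. A minimal sketch of how such examiner statistics could be computed (only the 30 granted / 68 resolved totals come from this report; the with/without-interview cohort counts are hypothetical, since the report gives only the aggregate lift):

```python
# Career allow rate and interview lift as simple ratios.
# Only the 30-granted / 68-resolved totals come from this report;
# the with/without-interview cohort counts below are hypothetical.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that ended in a grant."""
    return granted / resolved

career = allow_rate(30, 68)  # ~0.441, shown above as 44%

# Hypothetical split of the 68 resolved cases by interview history.
with_iv = allow_rate(16, 24)     # cases where an interview was held
without_iv = allow_rate(14, 44)  # cases with no interview
lift = with_iv - without_iv      # lift in percentage points

print(f"career allow rate: {career:.1%}")
print(f"interview lift:    {lift:+.1%}")
```

Note the lift is a difference in allow rates between the two cohorts, not a ratio; the cohort split shown here is illustrative only and does not reproduce the report's +33.4% exactly.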

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 68 resolved cases.

Office Action

§103 §112
DETAILED ACTION

1. This Office action is in response to the amendment filed on 11/20/2025 in Application No. 17/311,730. Claims 2-5 and 10-19 have been cancelled; claims 1 and 6-9 are presented for examination and are currently pending.

Response to Arguments

2. Applicant's arguments regarding the new limitation "and wherein the first model and the second model are configured to include the change over time by the updated first feature and the updated second feature" have been considered but are moot because a new secondary reference has been applied.

On page 1 of the remarks, the Applicant argued that "the Office Action fails to establish a prima facie case of obviousness in view of Beers and Adibowo with respect to claim 1, and the rejection of claim 1, and dependent claims 7 and 8, should be withdrawn". This argument has been considered but is moot because Beers in view of Adibowo in view of Polatkan has now been applied in light of the amendments.

On pages 1-2 of the remarks, the Applicant argued that "Claim 6 has been rejected under 35 U.S.C. §103 as being unpatentable over Beers in view of Luo (U.S. Pat. App. Pub. No. 2018/0197087) and Adibowo, and claim 9 has been rejected under 35 U.S.C. §103 as being unpatentable over Beers in view of Dwane (U.S. Pat. App. Pub. No. 2019/0208056) and Adibowo. Applicant requests reconsideration and withdrawal of these rejections because Luo, which is cited for showing retraining of a classification model, and Dwane, which is cited for showing case management, do not remedy the failure of Beers and Adibowo to describe or suggest the subject matter of claim 1, upon which these claims depend. All claims are in condition for allowance". This argument has been considered but is moot because Beers in view of Dwane in view of Adibowo and further in view of Polatkan has now been applied in light of the amendments. Furthermore, the claims are not in condition for allowance in light of the obviousness rejection detailed in this Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claims 1 and 6-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites "wherein the first model and the second model are configured to include the change over time …". It is not clear whether the "first classification models" and "second classification model" previously recited in lines 2 and 4 are the same as the newly added "first model" and "second model". For the purpose of examination, the Examiner has interpreted the first model and the second model to mean the same as the first classification models and the second classification model. Claims 6-9, which are not specifically mentioned, are rejected due to dependency.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Beers et al. (US20150206069) in view of Adibowo (US20190279073) and further in view of Polatkan et al. (US20200005047).

Regarding claim 1, Beers teaches a content classification method (The output of the classification-training step is a value that indicates how well that collection of features performs on the training set of patent data [0050]; a method, and means for implementing the method according to the present disclosure employs a set of algorithms based on training data [0026]. The Examiner notes that the contents are patents) comprising the steps of:

generating a plurality of first classification models by machine learning (At S4, a heuristic search method, such as ANN, is used to generate a first set of binary classifiers [0065]) using a plurality of learning contents (… a database of patent information, including granted patents and patent applications in addition to other relevant patent data, including aggregate data for patent examination, grant, opposition, abandonment, annuity/maintenance fee payment, and the like [0026]. The Examiner notes the learning contents are from the database) each provided with a first feature and a learning label (The system computes the model by first computing a set of features from the electronic patent data stored in the database. The features fall into two categories. The first category is the raw factors on a patent basis from Table 1 [0042]. The Examiner notes that claim type in Table 1 is a first feature labelled as a raw factor);

generating a second classification model with the use of the plurality of first classification models (a binary classifier optimizer configured to generate, using an automated processor, a candidate set of binary classifiers from the list of binary classifiers using a heuristic search and to generate, using the automated processor, a final set of binary classifiers by maximizing iteratively a yield according to a cost function [0009]. The Examiner notes the candidate set of binary classifiers is the plurality of first classification models and the second classification model is one of the final set of binary classifiers);

providing judgment data for a plurality of contents (For example, judged patent information receiver 43 may receive identifying and other detailed information about a patent of interest or a target patent document to be evaluated [0062]) each provided with a second feature with the use of the second classification model (wherein the device may be configured to test a validity of the final set of binary classifiers using the second set of patent data [0014]) and performing display on a graphical user interface (a user interface receiving the patent information for the target patent; … the user interface providing to a user a signal representing the estimate of patent quality [0016]; The computer system or systems that enable the user to interact with content or features can include a GUI (Graphical User Interface) [0071]);

wherein the first feature and the second feature comprise metadata of a patent number comprising at least one of a state of a family, an application type, and a number of abandoned applications in a family (The system maintains a database of raw patent factors that are derived from the patent publication [0033]; Claim type "A" refers to an apparatus claim, claim type "S" to a system claim, claim type "C" to a claim for a compound, and claim type "M" refers to a method claim [0043]. The Examiner notes the apparatus claim indicates the application type); and

wherein the learning label comprises information that the patent number is abandoned (In Table 2, legal status code refers to events during the lifetime of the patent. These include office actions, change of ownership, abandonment, maintenance and expiration [0043]).

Beers does not explicitly teach updating the first feature and the second feature based on a change over time predicted by a user.

Adibowo teaches updating the first feature and the second feature (Patent analysis system 210 may update a value associated with the idea disclosure object indicating whether an application has been published, granted, abandoned [0068]. The Examiner notes the first feature is whether a patent application is granted and the second feature is whether an application is abandoned) based on a change over time (Patent analysis system 110, however, may update the patentability model stored in model database 160 twice per day [0030]) predicted by a user (users may formulate idea disclosures and/or submit files containing idea disclosures to … patent analysis system 110 [0024]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Beers to incorporate the teachings of Adibowo for the benefit of using machine learning, artificial intelligence, and/or neural networks to analyze the body of publicly available information and/or to track the patentability of applications (Adibowo [0022]).

Modified Beers does not explicitly teach wherein the first model and the second model are configured to include the change over time by the updated first feature and the updated second feature.
Polatkan teaches wherein the first model and the second model (Another reason for constructing different standalone classification models is because different types of content items may have different structure and formatting and/or different type/variety of content [0068]; "Type" may refer to the type of content, such as text, audio, or video, or to a file type. For example, one standalone classification model may be constructed for textual content items (or content items whose primary content is text), … and another standalone classification may be constructed for visual content items (or content items whose primary content is visual) [0065]) are configured to include the change over time by the updated first feature (a new machine-learned model is generated regularly, such as every month, week, or other time period. Thus, the new machine-learned model may replace a previous machine-learned model. Newly acquired or changed training data may be used to update the model. For example, additional training data may be added to the model in order to produce a better prediction of standalone classification. As another example, the model may be updated if feature values of the existing training data have been changed. For example, co-viewing features of video items indicated in the training data may have changed [0040]; co-viewing features may indicate that the particular video was viewed in combination with another video by a user within a predefined time-frame or for an arbitrary duration [0046]. The Examiner notes that the co-viewing features are the updated first feature, the change over time is co-viewing the video within a predefined time-frame or for an arbitrary duration, and the new machine-learned model that replaces a previous machine-learned model is the first model) and the updated second feature (In a related embodiment, an extent to which attributes and values have changed for one or more video items in training data is determined. If, for example, the number of times that each of a certain percentage of the video items (e.g., 15%) in the training data have been viewed since the last training exceeds a particular threshold (e.g., 20 times), then a new model is trained based on updated feature values for each of the video items indicated in the training data [0040]. The Examiner notes that the number of times that each of a certain percentage of the video items (e.g., 15%) in the training data have been viewed is the updated second feature, the change over time is the number of viewings since the last training, and the newly trained model is the second model).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Beers to incorporate the teachings of Polatkan for the benefit of a machine learning technique used to generate a statistical or classification model that is trained based on a history of attribute values associated with metadata, content items, and other data (Polatkan [0038]) and to provide improvements to optimization of classification (Polatkan [0074]).

Regarding claim 7, Modified Beers teaches the content classification method according to claim 1. Beers teaches wherein features provided for the learning contents and the contents are management parameters assigned to patent numbers (The system then proceeds to compute the input features to the classifier using the raw factors from the patent record … A list of raw factors can be found in Table 1 [0041]; The system computes the model by first computing a set of features from the electronic patent data stored in the database. The features fall into two categories. The first category is the raw factors on a patent basis from Table 1. The second are features that are computed over multiple records of patent data (i.e., over the entire set or over a subset). A list of the features considered when training the model is listed in Table 2 [0042]. The Examiner notes the raw factors are management parameters assigned to patent numbers).

Regarding claim 8, Modified Beers teaches the content classification method according to claim 1. Beers teaches wherein the judgment data includes a classification label or a score (The final score is computed using the trained classifier and then saved with the patent record [0041]).

5. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Beers et al. (US20150206069) in view of Luo et al. (US20180197087) in view of Adibowo (US20190279073) and further in view of Polatkan et al. (US20200005047).

Regarding claim 6, Modified Beers teaches the content classification method according to claim 1. Beers teaches further comprising providing the learning contents with classification data (At S12, the final set of binary classifiers is reported or outputted. This set of binary classifiers to be used or validated and tested may be reported (S13) [0066]; At S14, a validation patent data set may be received … At S15, the validation patent data is used to validate the final set of binary classifiers [0067]); and selecting a content having the judgment data (The training and cross-validation sets are both used to select parameters in the model [0045]; The binary classifiers are trained using supervised machine learning with three sets of data: training set, cross-validation set … [0037]) which is the same as the classification data from the plurality of contents (The final output of each classifier is combined into a final score) and displaying the content having the judgment data on the graphical user interface (a user interface receiving the patent information for the target patent; … the user interface providing to a user a signal representing the estimate of patent quality [0016]; The computer system or systems that enable the user to interact with content or features can include a GUI (Graphical User Interface) [0071]).

Modified Beers does not explicitly teach the plurality of contents which are provided with classification labels with the use of an output of the second classification model.

Luo teaches which is the same as the classification data from the plurality of contents (For example, selected top features can be based on respective feature sets that include top text/word features (i.e., content data) that contribute to certain security classifications [0064]) which are provided with classification labels with the use of an output of the second classification model (Server 202 includes multiple logic features that can be used to iteratively generate updated security classification models. The classification models can be used by one or more users to classify and/or generate security labels for electronic data items created and stored within an example computer network [0050]).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Beers to incorporate the teachings of Luo for the benefit of utilizing machine learning logic to generate a first/initial current classification model for determining a … classification or label for electronic data items such as digital/electronic documents (Luo [0020]).

6. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Beers et al. (US20150206069) in view of Dwane et al. (US20190208056, filed 01/04/2018) in view of Adibowo (US20190279073) and further in view of Polatkan et al. (US20200005047).

Regarding claim 9, Modified Beers teaches the content classification method according to claim 8. Beers teaches further comprising designating, by the graphical user interface (a user interface receiving the patent information for the target patent; … the user interface providing to a user a signal representing the estimate of patent quality [0016]; The computer system or systems that enable the user to interact with content or features can include a GUI (Graphical User Interface) [0071]), and displaying a corresponding content in a list form (A list of raw factors can be found in Table 1 [0041]; The system computes the model by first computing a set of features from the electronic patent data stored in the database. The features fall into two categories. The first category is the raw factors on a patent basis from Table 1. The second are features that are computed over multiple records of patent data (i.e., over the entire set or over a subset). A list of the features considered when training the model is listed in Table 2 [0042]).
Modified Beers does not explicitly teach a particular numerical range of the score.

Dwane teaches designating a particular numerical range of the score and displaying a corresponding content in a list form (The customer effort variable (SetllaRating) represents a customer effort score. In certain embodiments, the customer service score is based on a numerical rating such as a 1-5 rating [0044]; a user device 204 refers to an information handling system such as a personal computer, … the user device is configured to present an estimation user interface 240 [0030]; A web page is a document which is accessible via a browser which displays the web page via a display device of an information handling system [0034]. The Examiner notes the numerical rating such as a 1-5 rating is displayed in a list form).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Modified Beers to incorporate the teachings of Dwane for the benefit of evaluating machine learning operations (Dwane [0062]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO, whose telephone number is (571) 272-8670. The examiner can normally be reached Monday-Friday, 8am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Jun 08, 2021
Application Filed
Aug 24, 2024
Non-Final Rejection — §103, §112
Dec 04, 2024
Response Filed
Mar 07, 2025
Final Rejection — §103, §112
Jun 20, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Jul 12, 2025
Non-Final Rejection — §103, §112
Nov 20, 2025
Response Filed
Feb 09, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602586
SUPERVISORY NEURON FOR CONTINUOUSLY ADAPTIVE NEURAL NETWORK
2y 5m to grant • Granted Apr 14, 2026
Patent 12530583
VOLUME PRESERVING ARTIFICIAL NEURAL NETWORK AND SYSTEM AND METHOD FOR BUILDING A VOLUME PRESERVING TRAINABLE ARTIFICIAL NEURAL NETWORK
2y 5m to grant • Granted Jan 20, 2026
Patent 12511528
NEURAL NETWORK METHOD AND APPARATUS
2y 5m to grant • Granted Dec 30, 2025
Patent 12367381
CHAINED NEURAL ENGINE WRITE-BACK ARCHITECTURE
2y 5m to grant • Granted Jul 22, 2025
Patent 12314847
TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS
2y 5m to grant • Granted May 27, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 44%
With Interview: 78% (+33.4%)
Median Time to Grant: 4y 8m
PTA Risk: High
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
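The projection figures are internally consistent if the interview lift is applied as additive percentage points on top of the career allow rate. That additive reading is an assumption on my part, not something the report states, but a quick arithmetic check matches:

```python
# Assumption: "With Interview" = career allow rate + interview lift,
# added in percentage points. The report does not state this model.
base = 30 / 68   # career allow rate (~44.1%) from the examiner stats
lift = 0.334     # the +33.4% interview lift shown above

with_interview = base + lift
print(f"{with_interview:.0%}")  # prints "78%", matching the projection
```

Using the unrounded 30/68 base (rather than the displayed 44%) is what makes the sum round to 78% instead of 77%.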
