Prosecution Insights
Last updated: April 19, 2026
Application No. 17/338,994

CONTINUOUS OPTIMIZATION OF HUMAN-ALGORITHM COLLABORATION PERFORMANCE

Status: Final Rejection (§101, §112)
Filed: Jun 04, 2021
Examiner: KNIGHT, PAUL M
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 79%
Examiner Intelligence

Career Allow Rate: 62% of resolved cases (169 granted / 272 resolved; +7.1% vs TC avg)
Interview Lift: +17.0% allowance lift on resolved cases with an interview (strong)
Typical Timeline: 3y 1m avg prosecution
Currently Pending: 24
Career History: 296 total applications across all art units

Statute-Specific Performance (rejection rates)

§101: 9.5% (-30.5% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§112: 35.2% (-4.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 272 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Style

In this action unitalicized bold is used for claim language, while italicized bold is used for emphasis.

Applicant Reply

“The claims may be amended by canceling particular claims, by presenting new claims, or by rewriting particular claims as indicated in 37 CFR 1.121(c). The requirements of 37 CFR 1.111(b) must be complied with by pointing out the specific distinctions believed to render the claims patentable over the references in presenting arguments in support of new claims and amendments. . . . The prompt development of a clear issue requires that the replies of the applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. . . . An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” MPEP § 714.02.

Generic statements or listings of numerous paragraphs do not “specifically point out the support for” claim amendments. “With respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04 which provides that a ‘simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported.’)” MPEP § 2163(II)(A).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? All claims are found to be directed to one of the four statutory categories, unless otherwise indicated in this action. With respect to claims 16-20, see Spec. ¶ 11 defining “computer readable storage medium.”

Step 2A, Prongs One and Two (Alice Step 1): The claims as a whole are directed to the abstract ideas recited in the claims. They do not recite additional elements that integrate the judicial exception into a practical application.1 To confer patent eligibility to an otherwise abstract idea, claims may recite a specific means or method of solving a specific problem in a technological field.2 Findings that a computer component is generic apply to all instances of the recited component in the claims.

Independent Claims:

Claim 1 recites: A method comprising: computing, by a system operatively coupled to a processor and comprising a graphical user interface, (This claim language is a mere instruction to implement the abstract ideas recited below, on ordinary computer components. This finding applies to all subsequent recitations of these components.)
a set of thresholds corresponding to a set of decision performances relative to a set of classifier confidence scores, wherein the set of decision performances comprise a set of user decision performances, a set of classifier decision performances, and a set of augmented decision performances; (“Computing” thresholds between “decision performances” and classifier confidence scores reads both on a mental process implemented on ordinary computing components (i.e. comparing correctness of a decision with the confidence associated with the decision), and on the mathematical operation of “computing” the set of thresholds. This remains true whether the “decision performance” originates from a user, a “classifier,” or a combination of both.)

selecting, by the system, one of a plurality of collaboration levels based on comparing the set of thresholds to a new confidence score of a new decision wherein the selecting comprises: performing multiple different evaluations between the new confidence score and different ones of thresholds within the set of thresholds; and choosing between, and based on the performing the multiple different evaluations: (The claimed invention is directed to a mental process of selecting. Specifically, the claims are directed to the mental process of selecting less assertive ways of collaborating to make a decision, as the confidence in one’s opinion decreases. In other words, this reads on the mental process of offering opinions more freely when one is more confident in the correctness of the opinion. The claims recite selecting between several levels of assertiveness associated with different confidence scores. At the highest confidence score, the system implements the mental process of selecting to output a decision without feedback, effectively ignoring other opinions when the system is so confident it need not ask for help making a decision. At the next highest confidence level, the system implements the mental process of selecting to communicate the opinion, thereby requesting feedback to verify an opinion. At the third confidence level, the system implements the mental process of selecting to ask whether communication of the opinion is desired via a prompt. This reads on the mental process of determining to ask whether someone else wants an opinion, when the person offering the opinion is unsure of the opinion. The collaboration levels are recited below.)

outputting a classifier decision as the final decision or selecting between different options comprising: electronically presenting, via the graphical user interface, case and classifier recommendation to the user external to the system and collaborating, by the system, with the user and receiving and processing, via the graphical user interface of the system, signals indicative of electronic feedback processed by the graphical user interface and input from a user external to the system based on the case and the classifier recommendation electronically presented via the graphical user interface; or electronically presenting, via the graphical user interface, the case with optional access to classifier recommendation to the user external to the system, and electronically presenting, via the graphical user interface, classification information to user if user requests; and receiving and processing, via the graphical user interface of the system, signals indicative of electronic feedback processed by the graphical user interface and input from a user external to the system. (The above recites the collaboration levels found to be part of the mental process in the rejection above and merely provides instructions to implement the abstract ideas using conventional computer components. The use of a GUI for communicating with a user is mere extra-solution activity, again, implemented using generic computer components.)
Independent claim 9 recites a device implementing the method of claim 1. Therefore, the same analysis applies. In addition, claim 9 recites “A system comprising: one or more processors; a memory coupled to at least one of the one or more processors; a graphical user interface coupled to at least one of the one or more processors; and a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of:” (This is a mere instruction to apply the subsequently claimed abstract ideas using ordinary computer components, functioning in their ordinary capacities.)

Independent claim 16 recites a product implementing the method of claim 1. Therefore, the same analysis applies. In addition, claim 16 recites “A computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an system, causes the system to perform actions comprising[.]” (This is a mere instruction to apply an exception by merely using ordinary computer parts, functioning in their ordinary capacity.)
Independent claim 16 further recites “wherein the selecting comprises: choosing between performing a first evaluation comprising outputting a classifier decision as the final decision if the new confidence score of the new decision is greater than a first threshold of the set of thresholds; performing a second evaluation comprising presenting case and classifier recommendation to the user external to the information handling system if the new confidence score of the new decision was not greater than the first threshold and the new confidence score is greater than the second threshold; performing a third evaluation comprising presenting case with optional access to classifier recommendation to the user external to the information handling system if the new confidence score of the new decision was not greater than the first threshold or the second threshold and the new confidence score is greater than the third threshold; and electronically presenting, via a graphical user interface, signals indicative of electronic feedback processed by the graphical user interface and input from a user external to the system based on the case without access to classifier recommendation if the new confidence score is not greater than the first threshold, the second threshold or the third threshold[.]”

Selecting/choosing between the four recited collaboration levels reads on a mental process. See rejection of claim 1. With respect to “electronically presenting signals indicative of electronic feedback processed by the graphical user interface and input from the user . . . without access to classifier recommendation if the new confidence score is not greater than [any of the three thresholds],” this merely recites the mental process of omitting the offering of a recommendation and taking the recommendation of another, when one has low confidence regarding a decision.
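Read on its face, the quoted limitation is a descending threshold cascade. The sketch below illustrates that reading only; the function name and threshold values are hypothetical, and the level labels are borrowed from claim 4's Markush group. It is not the applicant's implementation.

```python
# Illustrative reading of claim 16's threshold cascade. Level labels follow
# claim 4's Markush group; the function name and thresholds are hypothetical.

def select_collaboration_level(confidence: float, t1: float, t2: float, t3: float) -> str:
    """Compare a new confidence score against descending thresholds t1 > t2 > t3."""
    if confidence > t1:
        return "classifier alone"                     # output classifier decision as final
    if confidence > t2:
        return "classifier recommendation provided"   # present case and recommendation
    if confidence > t3:
        return "classifier recommendation available"  # recommendation only on request
    return "user alone"                               # present case without recommendation

# Example with hypothetical thresholds 0.9 / 0.7 / 0.5:
print(select_collaboration_level(0.95, 0.9, 0.7, 0.5))  # classifier alone
print(select_collaboration_level(0.60, 0.9, 0.7, 0.5))  # classifier recommendation available
```

On this reading exactly one of the four collaboration levels results from any confidence score, which is consistent with how the rejection characterizes the “selecting/choosing” as a single comparison performable in the mind.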
The electronically implemented operations read on mere instructions to implement the process using generic computer components. The language reciting the user as “external to the information handling system” indicates data input/output. But this reads on using ordinary computer components to allow the information handling system to communicate with the user, which is merely extra-solution activity.

Independent claim 16 further recites: “collaborating with a user external to the information handling system at the selected collaboration level to generate a final decision.” (See rejection of claim 1. Note also that “to generate a final decision” is written as an intended use. See MPEP §§ 2103, 2111.02, and 2111.04.)

Dependent Claims:

2. The method of claim 1 further comprising: generating, by the system, the user performance plot of a user based on the set of user decision performances, wherein the generating further comprises: (This claim appears to be directed to drawing or otherwise creating figure 5. The claimed “generating a user performance plot” reads on a way of organizing and manipulating information through mathematical correlations. This has been found to be an abstract idea. See MPEP § 2106.04(a)(2) citing Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344 (Fed. Cir. 2014).) analyzing, by the system, a plurality of outcomes of a plurality of previous decisions made by the user; assigning, by the system, a set of user performance values, based on the analysis, to a plurality of intervals of the set of classifier confidence scores, wherein the set of user performance values reflect a success of the plurality of outcomes of the plurality of previous decisions; (Analyzing outcomes of decisions and assigning a set of performance values that reflect the success of previous decisions, based on the analysis, reads on both a mental process and on a mathematical operation (e.g. determining the correct proportion).) and generating, by the system, the user performance plot based on the plurality of performance values at the plurality of intervals of the set of classifier confidence scores, (The claimed “generating” the plot based on the recited information is both a mental process and a mathematical operation consistent with MPEP § 2106.04(a)(2), cited above.)

wherein the performing multiple different evaluations between the new confidence score and different ones of thresholds within the set of thresholds; and choosing between outputting a classifier decision as the final decision, presenting case and classifier recommendation to user, presenting case with optional access to classifier recommendation to user and presenting case to user without access to classifier recommendation based on the performing the multiple different evaluations comprises: (See rejection of claim 1.) performing a first evaluation comprising outputting classifier decision as the final decision if the new confidence score of the new decision is greater than a first threshold of the set of thresholds; performing a second evaluation comprising presenting case and classifier recommendation to the user if the new confidence score of the new decision was not greater than the first threshold and the new confidence score is greater than the second threshold; performing a third evaluation comprising presenting case with optional access to classifier recommendation to the user if the new confidence score of the new decision was not greater than the first threshold or the second threshold and the new confidence score is greater than the third threshold; and presenting case to user without access to classifier recommendation if the new confidence score is not greater than the first threshold, the second threshold or the third threshold. (The claimed limitations to performing of evaluations read on mental processes. See rejection of claim 1.)
3. The method of claim 2 further comprising: generating, by the system, a classifier performance plot of a classifier based on the set of classifier decision performances; (This claim appears to be directed to drawing or otherwise creating figure 5. Generating a “plot” reads on a way of organizing and manipulating information through mathematical correlations. See MPEP § 2106.04(a)(2) citing Digitech.) generating, by the system, an augmented performance plot based on the set of augmented decision performances, wherein the set of augmented decision performances are based on a collaboration between the user and the classifier; (Generating a “plot,” including the data used in the plot, reads on a way of organizing and manipulating information through mathematical correlations. See MPEP § 2106.04(a)(2) citing Digitech.) and determining, by the system, the set of thresholds based on a set of intersections between the user performance plot, the classifier performance plot, and the augmented performance plot. (Determining thresholds (e.g. finding the locations where the lines cross in figure 5) is a mental process.)
4. The method of claim 1 wherein at least one of the plurality of collaboration levels is selected from the group consisting of a user alone collaboration level, a classifier alone collaboration level, a classifier recommendation available collaboration level, and a classifier recommendation provided collaboration level, wherein the user alone collaboration level is associated with the final decision being based on collaboration between the system and the user only, wherein the classifier alone collaboration level is associated with the final decision being based on the system using the decision to generate the final decision without involvement from the user, wherein the classifier recommendation available collaboration level is associated with the final decision being based on the user being informed by the system that a recommendation is available if requested to make a decision, and wherein the classifier recommendation provided collaboration level is associated with the final decision being based on the system automatically providing the recommendation from the classifier to the user. (This is written as a Markush group. See rejection of claim 1, noting that this reads on a “user alone” generating a “final decision.”)

5. The method of claim 1, wherein the set of user decision performances are based on a decision accuracy of a user, the set of classifier decision performances are based on a decision accuracy of a classifier, and the set of augmented decision performances are based on a decision accuracy of the user with assistance from the classifier. (Computing thresholds (per claim 1) based on “decision accuracy” of a user, classifier, and the combination of a user and a classifier reads on both a mental process and a mathematical operation (ranking).)
6. The method of claim 1 further comprising: determining, by the system, at least one of the set of crossover points as an automation bias crossover point, wherein the automation bias crossover point indicates a point at which the user performs better without a classifier recommendation. (The claimed “determining” that a crossover point is an automation bias crossover point is a mental process.)

7. The method of claim 1 wherein determining the new confidence score for the new decision further comprises: inputting, by the system, a new question into a classifier configured to issue the new decision and the new confidence score, wherein a degree of confidence of the new decision corresponds to the new confidence score. (Determining a new confidence score based on new information is a mental process.)

8. The method of claim 1, further comprising: receiving, by the system, a new set of decisions from a user in response to providing a new set of questions to the user; and re-computing, by the system, the set of thresholds based on the new set of decisions. (Recomputing the thresholds based on new data is a mental process. The receiving of decisions is merely extra-solution activity. Note that inputting and outputting of data has been found to be well-understood, routine, and conventional (WURC). See MPEP § 2106.05(d)(II).)

For rejections of claims 10-15, see rejections of claims 2-7, respectively. For rejections of claims 17-19 and 20, see rejections of claims 2-4 and 7, respectively.

Step 2B (Alice Step 2):

The rejected claims do not recite additional elements that amount to significantly more than the judicial exception. All additional limitations that do not integrate the claimed judicial exception into a practical application also fail to amount to significantly more for the reasons given at Step 2A, Prong Two. All limitations found to read on mere data gathering, storage, and data outputting at Step 2A, Prong Two are WURC.
This finding is based on cases which have recognized that generic input-output operations, repetitive processing operations, and storage operations are WURC.3 Other aspects of generic computing have also been found to be WURC.4 Further, the description itself may provide support for a finding that claim elements are WURC. The analysis under § 112(a) as to whether a claim element is “so well-known that it need not be described in detail in the patent specification” is the same as the analysis as to whether the claim element is widely prevalent or in common use.5 Similarly, generic descriptions in the Specification of claimed components and features have been found to support a conclusion that the claimed components were conventional.6

Improvements to the relevant technology may support a finding that the claims include a patent eligible inventive concept. But some mechanism that results in any asserted improvements must be recited in the claim, and the Specification must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing the improvement.7

All dependent claims are rejected as containing the material of the claims from which they depend.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

At the outset it is noted that separately listed claim elements are construed as distinct components, that all claim terms must be given weight, that there is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims, and that repeated and consistent descriptions in the specification indicate the proper scope of a claimed term. “[C]laims must ‘conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description.’ 37 C.F.R. § 1.75(d)(1).” Phillips v. AWH Corp., 415 F.3d 1303, 1316 (Fed. Cir. 2005) (as cited in MPEP § 2111). Therefore, use of two different terms in the claims that both rely on the description of a single structure in the Specification may render at least one term indefinite because there is no way to determine which term should be construed in view of the description of the single structure.

All independent claims substantially recite “computing, by a system operatively coupled to a processor . . . , a set of thresholds corresponding to a set of decision performances relative to a set of classifier confidence scores[.]” It is not clear whether the “set” of confidence scores refers to positive confidence (i.e. confidence that a condition exists) and the related negative confidence (i.e. confidence that a condition does not exist). For example, Figure 4 shows two crossover points at 30% and 70% confidence. This creates confusion because both a 30% “confidence” that a condition does not exist and a 70% “confidence” that a condition does exist refer to what would ordinarily be called 70% confidence that the classifier’s decision is correct. The Specification also explains the classifier being the least sure between 30% and 70% confidence. See Spec. ¶ 36. (“Classifier performance plot 410 shows that classifier 310's success rate is high when confidence of an answer is high (high probability that answer A is correct), and when confidence of an answer is low (low probability that answer A is correct, indicating that answer B is correct). Classifier performance plot 410 also shows that the success rate drops when the classifier is unsure of an answer (e.g., between 30%-70% confidence).”)

This usage of “confidence,” where 0% of applicant’s “confidence” would indicate 100% confidence that there is a 0% chance of state A, seems to be inconsistent with the plain meaning of “confidence.” But the term is never expressly defined in the Specification, so one of ordinary skill in the art would not have any way of determining whether confidence is being used to refer to the classifier’s estimation that its output is correct, or the likelihood a given option is true. Since the Specification uses “confidence” in a way that is inconsistent with the plain meaning of the term, without expressly defining the term, it is unclear whether the claimed “thresholds” corresponding to the “set” of confidence scores and relative performances merely refers to both cases where the classifier is 70% confident of being correct (in both the negative and positive directions, as illustrated by thresholds A and A’ in figure 7), or if the sets of thresholds require multiple different levels of confidence by the classifier (e.g. thresholds A, B, and C of figure 7 showing different levels of confidence that the classifier’s answer is correct).
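The ambiguity described above can be made concrete. Assuming, hypothetically, that the Specification's “confidence” means the reported probability of state A, a score of 30% and a score of 70% collapse to the same confidence that the classifier's chosen answer is correct:

```python
# Hypothetical reading: "confidence" = reported probability of state A.
# A binary classifier picks the likelier state, so its confidence of being
# correct is symmetric about 50% -- which is why the 30% and 70% crossover
# points in Figure 4 describe the same decision confidence.

def decision_confidence(p_state_a: float) -> float:
    """Confidence that the classifier's decision is correct."""
    return max(p_state_a, 1.0 - p_state_a)

print(decision_confidence(0.30))  # 0.7
print(decision_confidence(0.70))  # 0.7
```

Under the alternative reading, where “confidence” already means the classifier's estimation that its output is correct, 30% and 70% would be two distinct values; that is the indeterminacy the rejection identifies.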
All independent claims substantially recite “wherein the set of decision performances comprise a set of user decision performances, a set of classifier decision performances, and a set of augmented decision performances[.]” Dependent claims 5 and 13 purport to modify the scope of the claimed “performances” by reciting “wherein the set of user decision performances are based on a decision accuracy of a user, the set of classifier decision performances are based on a decision accuracy of a classifier, and the set of augmented decision performances are based on a decision accuracy of the user with assistance from the classifier[.]” There is presumed to be a difference in meaning and scope when different words or phrases are used in separate claims. The claimed “decision performances” are not terms of art, leaving the Specification as the only option for determining the plain meaning of the terms. But the Specification only explains the user/classifier/augmented decision performances as relating to accuracy. While the scope of claims 5 and 13 is clear because they expressly limit the terms consistent with the antecedent support in the Specification, the scope of the claimed “performances” recited in the independent claims must be differentiated from performances which are “based on a decision accuracy” as recited in claims 5 and 13. Without any support for performance referring to anything other than accuracy in the Specification, there is no way to distinguish the scope of the “performances” in the independent claims from the “performances” modified by the language in claims 5 and 13. This leaves the scope of the recited “performances” unclear in the independent claims.
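Since the Specification explains the three “decision performances” only as accuracies, the figure 5 construction the §101 rejection refers to (claims 2-3) would amount to tallying accuracy per confidence interval and taking thresholds where the resulting curves cross. A sketch under that assumption; all function names and numbers are illustrative, not taken from the application:

```python
# Hypothetical sketch: "decision performance" as accuracy per confidence
# interval, with thresholds at curve intersections (cf. figure 5, claims 2-3).
# Names and data are illustrative only.

def accuracy_per_bin(outcomes):
    """Map {confidence_bin: [bool, ...]} to {confidence_bin: fraction correct}."""
    return {b: sum(v) / len(v) for b, v in outcomes.items()}

def crossover_bins(curve_a, curve_b):
    """Return (lo, hi) bin pairs between which the two accuracy curves cross."""
    bins = sorted(curve_a)
    crossings = []
    for lo, hi in zip(bins, bins[1:]):
        d0 = curve_a[lo] - curve_b[lo]
        d1 = curve_a[hi] - curve_b[hi]
        if d0 * d1 < 0:  # sign change: the plots intersect between lo and hi
            crossings.append((lo, hi))
    return crossings

# Illustrative per-bin accuracies for a user alone vs. the classifier alone:
user = {0.1: 0.80, 0.3: 0.75, 0.5: 0.70, 0.7: 0.65, 0.9: 0.60}
clf  = {0.1: 0.60, 0.3: 0.65, 0.5: 0.72, 0.7: 0.80, 0.9: 0.92}
print(crossover_bins(user, clf))  # [(0.3, 0.5)]
```

In this sketch the classifier's curve overtakes the user's between the 0.3 and 0.5 bins, so a threshold would be placed in that interval; the augmented curve would be handled the same way.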
All independent claims substantially recite “receiving and processing, via the graphical user interface of the system, signals indicative of electronic feedback processed by the graphical user interface and input from a user external to the system based on the case and the classifier recommendation electronically presented via the graphical user interface[.]” (Claim 16 recites “electronically presenting, via a graphical user interface, signals indicative of electronic feedback processed by the graphical user interface and input from a user external to the system[.]”) Use of different terms indicates separate claim elements. It is not clear how “signals indicative of electronic feedback” is different from “input from a user.” The input from a user reads on input to a computer. Input from a user to a computer would generally be understood by one of ordinary skill as a “signal.” Note that the claims do not recite a signal in response to input from a user. Rather, the input from the user and the signal are recited using different terms, indicating they are separate claim elements. It is not clear whether the signal is in response to input by the user (e.g. a user types on a keyboard and a computer generates a signal in response), as would be more consistent with the claimed invention as a whole, or if the input from a user is evaluated by the system separately from the signal, as would be more consistent with the wording of the claims.

Claims 1 and 9 substantially recite “presenting . . . the case with optional access to classifier recommendation to the user . . . and electronically presenting . . . classification recommendation to user if user requests[.]” It is not clear whether the similar terms “classification recommendation” and “classifier recommendation” refer to the same claim element, or to different claim elements.
Further, it is not clear whether the terms are singular or plural because neither term is preceded by an article (consistent with the plural form) and neither term ends with an “s” (consistent with the singular form).

Claims 1 and 9 substantially recite “electronically presenting, via the graphical user interface, classification recommendation to user if user requests[.]” It is not clear whether both instances of “user” are plural or singular because both omit an article but are written in the singular form. Further, it is not clear whether they refer to the same claim element (i.e. the same user) because the second claimed “user” appears to be in the singular form but is not preceded by a definite article (i.e. there is no “the” before the second “user”).

Claim 16 recites “performing a first evaluation comprising outputting classifier decision . . . performing a second evaluation comprising presenting . . . performing a third evaluation comprising presenting . . . presenting case to user[.]” It is not clear what is meant by an “evaluation” that comprises “outputting” or an “evaluation” that comprises “presenting.” Ordinarily the operations of evaluating and outputting/presenting would be interpreted as separate operations. But the claim language indicates that one operation is part of the other. Without any objective measure indicating when outputting/presenting would be part of an “evaluation,” the claim language is indefinite.

Claim 16 omits articles before the first instance of the terms “case and classifier recommendation,” and before the optional access to “[a/the] classifier recommendation.” It is not clear if these are meant to be plural or merely omit antecedent basis.

Claim 16 recites “a set of user decision performances” . . .
“the user external to the information handling system” (twice), “input from a user external to the system,” and “collaborating with a user external to the information handling system.” It is not clear whether “the system” and “the information handling system” refer to the same system. It is also not clear whether the language “a user external to the system” refers to a different user than “the user external to the information handling system,” or whether either of the aforementioned users refers to the same element as “a user external to the information handling system.”

All dependent claims are rejected as containing the limitations of the claims from which they depend.

Response to Arguments

Applicant's arguments filed 09/03/2025 have been fully considered but they are not persuasive.

Rejections under § 101: No specific arguments are found in the Applicant Remarks.

Rejections under § 112(b): No specific arguments are found in the Applicant Remarks.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL M KNIGHT whose telephone number is (571) 272-8646.
The examiner can normally be reached Monday - Friday, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

PAUL M. KNIGHT
Examiner, Art Unit 2124
/PAUL M KNIGHT/Examiner, Art Unit 2124

1. Step 2A prongs one and two are evaluated individually, consistent with the framework in the MPEP. Evaluation of relationships between abstract ideas and additional elements in one location promotes clarity of the record.

2. “In short, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. 
Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. . . . It should be noted that while this consideration is often referred to in an abbreviated manner as the ‘improvements consideration,’ the word ‘improvements’ in the context of this consideration is limited to improvements to the functioning of a computer or any other technology/technical field, whether in Step 2A Prong Two or in Step 2B.” MPEP § 2106.04(d)(1). See also Koninklijke KPN N.V. v. Gemalto M2M GmbH, 942 F.3d 1143, 1150-1152 (Fed. Cir. 2019).

3. See MPEP § 2106.05(d)(II), listing operations including “receiving or transmitting data,” “storing and retrieving data in memory,” and “performing repetitive calculations” as well-understood, routine, conventional (WURC) activity.

4. “But ‘[f]or the role of a computer in a computer-implemented invention to be deemed meaningful in the context of this analysis, it must involve more than performance of 'well-understood, routine, [and] conventional activities previously known to the industry.’ Content Extraction, 776 F.3d at 1347-48 (quoting Alice, 134 S. Ct. at 2359). Here, the server simply receives data, ‘extract[s] classification information . . . from the received data,’ and ‘stor[es] the digital images . . . taking into consideration the classification information.’ See ‘295 patent, col. 10 ll. 1-17 (Claim 17). . . . These steps fall squarely within our precedent finding generic computer components insufficient to add an inventive concept to an otherwise abstract idea. Alice, 134 S. Ct. 
at 2360 (‘Nearly every computer will include a 'communications controller' and a 'data storage unit' capable of performing the basic calculation, storage, and transmission functions required by the method claims.’); Content Extraction, 776 F.3d at 1345, 1348 (‘storing information’ into memory, and using a computer to ‘translate the shapes on a physical page into typeface characters,’ insufficient to confer patent eligibility); Mortg. Grader, 811 F.3d at 1324-25 (generic computer components such as an ‘interface,’ ‘network,’ and ‘database,’ fail to satisfy the inventive concept requirement); Intellectual Ventures I, 792 F.3d at 1368 (a ‘database’ and ‘a communication medium’ ‘are all generic computer elements’); BuySAFE v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (‘That a computer receives and sends the information over a network—with no further specification—is not even arguably inventive.’).” TLI Commc'ns LLC v. AV Auto., LLC, 823 F.3d 607, 614 (Fed. Cir. 2016) (emphasis added).

5. “The analysis as to whether an element (or combination of elements) is widely prevalent or in common use is the same as the analysis under 35 U.S.C. 112(a) as to whether an element is so well-known that it need not be described in detail in the patent specification. See Genetic Techs. Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (supporting the position that amplification was well-understood, routine, conventional for purposes of subject matter eligibility by observing that the patentee expressly argued during prosecution of the application that amplification was a technique readily practiced by those skilled in the art to overcome the rejection of the claim under 35 U.S.C. 112, first paragraph)[.]” MPEP § 2106.05(d)(I).

6. “Similarly, claim elements or combinations of claim elements that are routine, conventional or well-understood cannot transform the claims.” (citing BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290-1291 (Fed. Cir. 2018)). 
“When the patent's specification ‘describes the components and features listed in the claims generically,’ it ‘support[s] the conclusion that these components and features are conventional.’ Weisner v. Google LLC, 51 F.4th 1073, 1083-84 (Fed. Cir. 2022); see also Beteiro, LLC v. DraftKings Inc., 104 F.4th 1350, 1357-58 (Fed. Cir. 2024).” Broadband iTV, Inc. v. Amazon.com, Inc., 113 F.4th 1359 (Fed. Cir. 2024).

7. “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.” MPEP § 2106.05(a).

Prosecution Timeline

Jun 04, 2021
Application Filed
Aug 24, 2024
Non-Final Rejection — §101, §112
Nov 27, 2024
Response Filed
Dec 04, 2024
Examiner Interview (Telephonic)
Dec 04, 2024
Examiner Interview Summary
Feb 06, 2025
Final Rejection — §101, §112
Apr 03, 2025
Applicant Interview (Telephonic)
Apr 03, 2025
Examiner Interview Summary
May 15, 2025
Request for Continued Examination
May 21, 2025
Response after Non-Final Action
May 30, 2025
Non-Final Rejection — §101, §112
Sep 03, 2025
Response Filed
Oct 14, 2025
Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530592
NON-LINEAR LATENT FILTER TECHNIQUES FOR IMAGE EDITING
2y 5m to grant Granted Jan 20, 2026
Patent 12530612
METHODS FOR ALLOCATING LOGICAL QUBITS OF A QUANTUM ALGORITHM IN A QUANTUM PROCESSOR
2y 5m to grant Granted Jan 20, 2026
Patent 12499348
READ THRESHOLD PREDICTION IN MEMORY DEVICES USING DEEP NEURAL NETWORKS
2y 5m to grant Granted Dec 16, 2025
Patent 12462201
DYNAMICALLY OPTIMIZING DECISION TREE INFERENCES
2y 5m to grant Granted Nov 04, 2025
Patent 12456057
METHODS FOR BUILDING A DEEP LATENT FEATURE EXTRACTOR FOR INDUSTRIAL SENSOR DATA
2y 5m to grant Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
62%
Grant Probability
79%
With Interview (+17.0%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 272 resolved cases by this examiner. Grant probability derived from career allow rate.
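The projection figures above can be reproduced with simple arithmetic: the 62% grant probability is the examiner's career allow rate (169 granted of 272 resolved), and the 79% with-interview figure appears to be that rate plus the 17.0-point interview lift. A minimal sketch of that derivation, with the caveat that treating the lift as additive is an assumption (the report does not say whether the +17.0% is additive or a ratio), and `allow_rate` is an illustrative helper, not part of any real API:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(169, 272)   # 169 granted of 272 resolved -> ~62.1%
lifted = base + 17.0          # assumes the +17.0% interview lift is additive

print(round(base))    # displayed grant probability: 62
print(round(lifted))  # displayed with-interview probability: 79
```

Note that the lift itself is an observed difference between the examiner's with-interview and without-interview outcomes, so applying it to any single application is a heuristic, not a prediction.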
