Prosecution Insights
Last updated: April 19, 2026
Application No. 18/647,841

DIFFERENTIAL HANDWRITING SKILL SCORE BASED MOTOR SKILL EVALUATION

Non-Final OA (§101, §102, §103)
Filed: Apr 26, 2024
Examiner: SHIMELES, BEZAWIT NOLAWI
Art Unit: 2673
Tech Center: 2600 (Communications)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved), +38.0% vs TC avg; grants above average
Interview Lift: -100.0% (minimal lift), based on resolved cases with interview
Avg Prosecution (typical timeline): 2y 9m
Career History: 14 total applications across all art units, 13 currently pending

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Based on career data from 1 resolved case; Tech Center averages are estimates.

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/26/2024 is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because:

Regarding independent claim 1 and its dependent claims 2-6, claim 1 is directed to a process (method), which falls within the four statutory categories. Claim 1 recites, in part: “first analyzing,… a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image; second analyzing,… the handwritten image,… the second analyzing resulting in a second text output corresponding to the handwritten image; and generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score.” The limitations as drafted above are processes that, under the broadest reasonable interpretation (BRI), cover performance of the limitations in the mind, which falls within the “mental processes” grouping of abstract ideas. The limitations above are steps that, under BRI, a human can also perform through mental processes such as observation and evaluation, as the claim merely recites steps of collecting and analyzing information; a human could visually inspect a handwritten image and analyze it to generate an equivalent text output. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application.
In particular, the claim recites the following additional elements: “a computer-implemented method”; “using a first image-to-text model having a first performance level”; “using a second image-to-text model having a second performance level”; and “wherein the first performance level and the second performance level are configured using different tolerance levels for a handwriting variation.” The additional element of “a computer-implemented method” is part of the preamble reciting a generic computer for executing the abstract evaluation. The limitations of “using a first image-to-text model having a first performance level” and “using a second image-to-text model having a second performance level” are merely additional tools to implement the abstract idea of evaluating handwriting skill without specifying a particular model architecture or unconventional processing techniques, instead reciting generic functional components; “wherein the first performance level and the second performance level are configured using different tolerance levels for a handwriting variation” further recites parameter variations within models at a high level of generality to perform a known function. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea. Please see MPEP § 2106.04(d).III.C. The additional elements indicated above, considered individually and in combination, do not amount to significantly more than the judicial exception. Please see MPEP § 2106.05. For all of the foregoing reasons, claim 1 does not comply with the requirements of 35 U.S.C. 101. Accordingly, the dependent claims 2-6 do not provide elements that overcome the deficiencies of independent claim 1.
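For context on the technique the independent claim describes, it can be read as an ensemble-disagreement measurement: two image-to-text passes with different tolerance for handwriting variation transcribe the same image, and the distance between their outputs becomes the skill score. Below is a minimal sketch of that idea; the function name, the difflib-based distance, and the hard-coded model outputs are illustrative assumptions, not the applicant's disclosed implementation.

```python
import difflib

def handwriting_skill_score(lenient_text: str, strict_text: str) -> float:
    """Score handwriting legibility from the disagreement between two
    image-to-text passes: when a high-tolerance and a low-tolerance
    model produce the same transcript, the writing is unambiguous
    (score near 1.0); large disagreement drives the score toward 0.0.
    """
    return difflib.SequenceMatcher(None, lenient_text, strict_text).ratio()

# Hypothetical model outputs for one handwritten image. In a real
# system these would come from two OCR models configured with
# different tolerance levels for a handwriting variation.
lenient_output = "the quick brown fox"   # forgiving model: clean read
strict_output = "the qvick brovvn fox"   # strict model: keeps ambiguous strokes

score = handwriting_skill_score(lenient_output, strict_output)
assert 0.0 <= score <= 1.0
```

Any string-distance measure (edit distance, character error rate) could stand in for `SequenceMatcher.ratio`; the claim only requires that the score be generated by analyzing a difference between the two text outputs.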
Moreover, claim 4 recites, in part, a wherein clause that merely further specifies the element from which it depends, and is therefore neither an indication that the abstract idea is integrated into a practical application nor considered significantly more.

Claim 2 recites, in part, “receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing… the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score.” These limitations recite steps that, under BRI, a human can also perform through mental processes of observation and evaluation: the human mind can observe a handwritten image that was generated in response to a given prompt, analyze the image to generate an equivalent text output that corresponds to the given image, and compare the recognized output to expected text or truth values; “using the first image-to-text model” is merely an additional tool to implement the abstract idea of evaluating handwriting skill without specifying a particular model architecture or unconventional processing techniques, instead reciting generic functional computing components.
Claim 3 recites, in part, “receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing… the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score.” These limitations recite steps that, under BRI, a human can also perform through mental processes of observation and evaluation: the human mind can observe a handwritten image of a drawing that was generated in response to a given prompt, analyze the image to generate an equivalent text output that corresponds to the given image, and compare the recognized output to expected text or truth values; “using the first image-to-text model” is merely an additional tool to implement the abstract idea of evaluating handwriting skill without specifying a particular model architecture or unconventional processing techniques, instead reciting generic functional computing components.

Claim 5 recites, in part, “generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human.” These limitations recite steps that, under BRI, a human can also perform through mental processes of observation and evaluation: the human mind can observe a set of text outputs corresponding to a handwritten image and compare the differences to generate a score or ranking based on the observed differences.
Claim 6 recites, in part, “generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.” This limitation recites a generic step of insignificant extra-solution/post-solution activity of data generation/gathering and does not provide significantly more. Accordingly, the dependent claims 2-6 are not patent eligible under 35 U.S.C. 101.

Regarding independent claim 7 and its dependent claims 8-14, independent claim 7 recites limitations analogous to those of independent claim 1. Hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons given in the claim 1 analysis above. Furthermore, claim 7 recites some additional features, such as “a computer program product comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by a processor to cause the processor to perform operations comprising…” The recited features are generic computers and computer components recited at a high level of generality to perform generic, well-known functions, such as a processor processing instructions stored in a memory. Accordingly, the dependent claims 8-14 do not provide elements that overcome the deficiencies of independent claim 7. Moreover, claims 8, 9, and 12 recite, in part, wherein clauses that merely further specify the element from which each of them depends, and are therefore neither an indication that the abstract idea is integrated into a practical application nor considered significantly more. The dependent claims 10-14 each recite limitations analogous to those of dependent claims 2-6; hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons provided in the analysis above.

Regarding independent claim 15 and its dependent claims 16-20, independent claim 15 recites limitations analogous to those of independent claim 1.
Hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons given in the claim 1 analysis above. Furthermore, claim 15 recites some additional features, such as “a computer system comprising a processor and one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by the processor to cause the processor to perform operations comprising…” The recited features are generic computers and computer components recited at a high level of generality to perform generic, well-known functions, such as a processor processing instructions stored in a memory. Accordingly, the dependent claims 16-20 do not provide elements that overcome the deficiencies of independent claim 15. Moreover, claim 18 recites, in part, a wherein clause that merely further specifies the element from which it depends, and is therefore neither an indication that the abstract idea is integrated into a practical application nor considered significantly more. The dependent claims 16-20 each recite limitations analogous to those of dependent claims 2-6; hence, these analogous limitations are not eligible under 35 U.S.C. 101 for the reasons provided in the analysis above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7, 8, and 15 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by DEMCHALK et al. (US 20240212375 A1), hereinafter referenced as DEMCHALK.

Regarding claim 1, DEMCHALK teaches a computer-implemented method (Figs. 3-4, Paragraph [0045] – DEMCHALK discloses FIG. 3 is a flowchart illustrating a method 300 for extracting text from a document. Paragraph [0072] – DEMCHALK discloses one or more computer systems 400 may be used, for example, to implement any of the embodiments discussed herein, as well as combinations and sub-combinations thereof.) comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. Paragraph [0018] – DEMCHALK further discloses the documents can be in various formats, such as portable document format (PDF), scanned images of text, or other formats that are not necessarily stored in a machine-readable format.), the first analyzing resulting in a first text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114. For example, in some embodiments, an OCR tool extracts text by analyzing the contents of images in the document, such as PDF images. 
As another example, a different OCR tool extracts text by capturing images of each page of the document and then extracting the text from those images.); second analyzing, using a second image-to-text model having a second performance level, the handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. See also Paragraph [0018].), wherein the first performance level and the second performance level are configured using different tolerance levels for a handwriting variation (Fig. 1, Paragraph [0022] – DEMCHALK discloses document OCR system 110 has OCR tools 114. Each OCR tool in OCR tools 114 is a different OCR tool configured to use a different algorithm or technique to perform OCR on documents. In some embodiments, the differences in OCR tools 114 include that the OCR tools are for specific types of documents, specific speed of performing OCR, or variations of similar OCR algorithms.), the second analyzing resulting in a second text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114.); and generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score (Fig. 3, Paragraph [0057] – DEMCHALK discloses in 370, text extractor 116 selects a final text between the text and the different text based on the document metrics. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on the same document metric(s) used in operation 340. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on different document metric(s) used in operation 340.). 
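The DEMCHALK passages mapped above describe a select-best-extraction workflow: several OCR tools each transcribe the same document, and a document metric picks the final text. A minimal sketch of that pattern follows; the dictionary-hit-rate metric, the function names, and the candidate strings are hypothetical stand-ins for illustration, not DEMCHALK's disclosed implementation.

```python
def dictionary_hit_rate(text: str, vocabulary: set[str]) -> float:
    """Document metric: fraction of extracted words found in a known
    vocabulary. A higher rate suggests a more plausible extraction."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in vocabulary for w in words) / len(words)

def select_final_text(candidates: list[str], vocabulary: set[str]) -> str:
    """Select the candidate extraction that scores best on the metric,
    analogous to choosing a final text based on document metrics."""
    return max(candidates, key=lambda t: dictionary_hit_rate(t, vocabulary))

vocab = {"invoice", "total", "due", "march"}
candidates = [
    "1nvoice t0tal due march",   # tool A: digit/letter confusions
    "invoice total due march",   # tool B: clean extraction
]
final = select_final_text(candidates, vocab)
print(final)  # invoice total due march
```

The contrast with the claims is visible in this sketch: the metric here judges each extraction on its own and discards the losers, whereas the claimed method derives its score from the difference between the two outputs.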
Regarding claim 7, DEMCHALK teaches a computer program product comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media (Fig. 4, Paragraph [0084] – DEMCHALK discloses a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer usable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 400, main memory 408, secondary memory 410, and removable storage units 418 and 422, as well as tangible articles of manufacture embodying any combination of the foregoing.), the program instructions executable by a processor to cause the processor to perform operations (Fig. 4, Paragraph [0084] – DEMCHALK discloses such control logic, when executed by one or more data processing devices (such as computer system 400), may cause such data processing devices to operate as described herein.) comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. Paragraph [0018] – DEMCHALK further discloses the documents can be in various formats, such as portable document format (PDF), scanned images of text, or other formats that are not necessarily stored in a machine-readable format.), the first analyzing resulting in a first text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114. For example, in some embodiments, an OCR tool extracts text by analyzing the contents of images in the document, such as PDF images. 
As another example, a different OCR tool extracts text by capturing images of each page of the document and then extracting the text from those images.); second analyzing, using a second image-to-text model having a second performance level, the handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. See also Paragraph [0018].), wherein the first performance level and the second performance level are configured using different tolerance levels for a handwriting variation (Fig. 1, Paragraph [0022] – DEMCHALK discloses document OCR system 110 has OCR tools 114. Each OCR tool in OCR tools 114 is a different OCR tool configured to use a different algorithm or technique to perform OCR on documents. In some embodiments, the differences in OCR tools 114 include that the OCR tools are for specific types of documents, specific speed of performing OCR, or variations of similar OCR algorithms.), the second analyzing resulting in a second text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114.); and generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score (Fig. 3, Paragraph [0057] – DEMCHALK discloses in 370, text extractor 116 selects a final text between the text and the different text based on the document metrics. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on the same document metric(s) used in operation 340. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on different document metric(s) used in operation 340.). 
Regarding claim 8, DEMCHALK teaches the computer program product of claim 7. DEMCHALK further teaches wherein the stored program instructions are stored in a computer readable storage device in a data processing system (Fig. 4, Paragraph [0078] – DEMCHALK discloses removable storage unit 418 may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data.), and wherein the stored program instructions are transferred over a network from a remote data processing system (Fig. 4, Paragraph [0080] – DEMCHALK discloses computer system 400 may further include a communication or network interface 424. Communication interface 424 may enable computer system 400 to communicate and interact with any combination of external devices, external networks, external entities, etc. (individually and collectively referenced by reference number 428). For example, communication interface 424 may allow computer system 400 to communicate with external or remote devices 428 over communications path 426, which may be wired and/or wireless (or a combination thereof), and which may include any combination of LANs, WANs, the Internet, etc.).

Regarding claim 15, DEMCHALK teaches a computer system comprising a processor (Fig. 4, #400 called computer system, Paragraph [0073] – DEMCHALK discloses computer system 400 may include one or more processors (also called central processing units, or CPUs), such as a processor 404.) and one or more computer readable storage media (Paragraph [0074] – DEMCHALK discloses removable storage unit 418 may include a computer usable or readable storage device having stored thereon computer software (control logic) and/or data.), and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by the processor to cause the processor to perform operations (Fig. 
4, Paragraph [0084] – DEMCHALK discloses a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer usable or readable medium having control logic (software) stored thereon may also be referred to herein as a computer program product or program storage device. Such control logic, when executed by one or more data processing devices (such as computer system 400), may cause such data processing devices to operate as described herein.) comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. Paragraph [0018] – DEMCHALK further discloses the documents can be in various formats, such as portable document format (PDF), scanned images of text, or other formats that are not necessarily stored in a machine-readable format.), the first analyzing resulting in a first text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114. For example, in some embodiments, an OCR tool extracts text by analyzing the contents of images in the document, such as PDF images. As another example, a different OCR tool extracts text by capturing images of each page of the document and then extracting the text from those images.); second analyzing, using a second image-to-text model having a second performance level, the handwritten image (Fig. 2, Paragraph [0033] – DEMCHALK discloses in 210, document downloader 112 accesses a document for processing. See also Paragraph [0018].), wherein the first performance level and the second performance level are configured using different tolerance levels for a handwriting variation (Fig. 
1, Paragraph [0022] – DEMCHALK discloses document OCR system 110 has OCR tools 114. Each OCR tool in OCR tools 114 is a different OCR tool configured to use a different algorithm or technique to perform OCR on documents. In some embodiments, the differences in OCR tools 114 include that the OCR tools are for specific types of documents, specific speed of performing OCR, or variations of similar OCR algorithms.), the second analyzing resulting in a second text output corresponding to the handwritten image (Fig. 2, Paragraph [0035] – DEMCHALK discloses in 220, text extractor 116 extracts sets of text from the document using one or more OCR tools 114. Each set of text is extracted using a different OCR tool from OCR tools 114.); and generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score (Fig. 3, Paragraph [0057] – DEMCHALK discloses in 370, text extractor 116 selects a final text between the text and the different text based on the document metrics [wherein metrics is score]. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on the same document metric(s) used in operation 340. In some embodiments, operation 370 selects between the text selected in operation 340 and the different text based on different document metric(s) used in operation 340.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 5, 6, 10, 13, 14, 16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over DEMCHALK (US 20240212375 A1), hereinafter referenced as DEMCHALK, in view of WANG (US 20180293434 A1), hereinafter referenced as WANG.

Regarding claim 2, DEMCHALK teaches the computer-implemented method of claim 1. DEMCHALK fails to explicitly teach further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score. However, WANG explicitly teaches further comprising: receiving a second handwritten image (Fig. 10, Paragraph [0175] – WANG discloses the user interface may receive interactions in response to providing the first test item, including handwritten user input. 
Digital representations of the handwritten user input may be transmitted by the assessment application 1004 back to the content management servers 102 dynamically or at a predetermined time associated with the assessment.), the second handwritten image generated in response to a first prompt (Fig. 1, Paragraph [0069] – WANG discloses a user can receive content from the content distribution network 100 and can, subsequent to receiving that content, provide a response to the received content. In some embodiments, for example, the received content can comprise one or several questions, prompts, or the like, and the response to the received content can comprise an answer to those one or several questions, prompts, or the like.), the first prompt comprising a portion of text to handwrite (Fig. 9, Paragraph [0145] – WANG discloses the assessment processor 904 may generate text, images, audio, video, or other digital data to initiate and interaction from the user. The data may comprise a test item associated with an assessment. In some examples, the assessment processor 904 may receive one or more responses to the provided test item as part of the assessment.); third analyzing, using the first image-to-text model, the second handwritten image (Fig. 13, Paragraph [0204] – WANG discloses the system may receive a plurality of responses 1320 that are compared with an alphabet, letters, or other characters.), the third analyzing resulting in a third text output corresponding to the second handwritten image (Fig. 15, Paragraph [0226] – WANG discloses in illustration 1500, the content management server 102 may receive a digital representation of handwritten user input 1510. The content management server 102 may determine the X-coordinate and Y-coordinate and generate a feature set from the derivatives of these values. 
One or more scores may be generated for individual characters or words overall, in order to generate a word prediction based on character level hypothesis and/or the scores. Once the word is predicted as the test response, the word may be submitted for scoring purposes.); and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score (Fig. 18, Paragraph [0261] – WANG discloses at 1808, the first response score may be compared with a threshold. For example, the content management server 102 may compare the first response score determined from the automated handwriting assessment method with a spelling accuracy threshold value and/or a letter accuracy threshold value.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer-implemented method comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score. 
That is, the combination yields DEMCHALK’s computer-implemented method further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score.

The motivation behind the modification would have been to obtain a computer-implemented method for assessing handwriting skill by using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy, since both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools, comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; and WANG discloses systems, methods, and devices to provide a digital assessment of a user's handwriting to assess the user's knowledge of a language, and to infer objective scores or a scale for real-time analysis of a plurality of user devices. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. 
Regarding claim 5, DEMCHALK teaches the computer-implemented method of claim 1. DEMCHALK fails to explicitly teach further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. However, WANG explicitly teaches further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score (Fig. 14, Paragraph [0219] – WANG discloses at 1410, one or more scores may be generated. The one or more scores may comprise each state's observation score, a ground truth model score, or a test response in association with the handwritten user input.), wherein the fifth text output comprises an analysis of the handwritten image produced by a human (Fig. 14, Paragraph [0187] – WANG discloses a model response score may be determined, in some examples, by a human assessor or in-house annotator.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer-implemented method comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human.
That is, DEMCHALK’s computer-implemented method would be modified to further comprise: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. The motivation behind the modification would have been to obtain a computer-implemented method for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Regarding claim 6, DEMCHALK teaches the computer-implemented method of claim 1. DEMCHALK fails to explicitly teach further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
However, WANG explicitly teaches further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend (Fig. 3, Paragraph [0063] – WANG discloses the user profile data store 301 can further include information identifying one or several user skill levels. In some embodiments, these one or several user skill levels can identify a skill level determined based on past performance by the user interacting with the content delivery network 100, and in some embodiments, these one or several user skill levels can identify a predicted skill level determined based on past performance by the user interacting with the content delivery network 100 and one or several predictive models.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer-implemented method comprising: generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score, with the teachings of WANG of having further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend. That is, DEMCHALK’s computer-implemented method would be modified to further comprise: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
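As a rough, hypothetical sketch (not WANG's actual implementation), the claimed "skill trend" limitation amounts to comparing the current handwriting skill score against a previously stored score; the function name and the flat tolerance threshold below are illustrative assumptions:

```python
# Illustrative sketch only: classify the change between two handwriting
# skill scores as a coarse trend label.

def skill_trend(current: float, previous: float, tolerance: float = 0.05) -> str:
    """Return "improving", "declining", or "steady" based on the score delta."""
    delta = current - previous
    if delta > tolerance:
        return "improving"
    if delta < -tolerance:
        return "declining"
    return "steady"
```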
The motivation behind the modification would have been to obtain a computer-implemented method for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Regarding claim 10, DEMCHALK teaches the computer program product of claim 7. DEMCHALK fails to explicitly teach further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score.
However, WANG explicitly teaches further comprising: receiving a second handwritten image (Fig. 10, Paragraph [0175] – WANG discloses the user interface may receive interactions in response to providing the first test item, including handwritten user input. Digital representations of the handwritten user input may be transmitted by the assessment application 1004 back to the content management servers 102 dynamically or at a predetermined time associated with the assessment.), the second handwritten image generated in response to a first prompt (Fig. 1, Paragraph [0069] – WANG discloses a user can receive content from the content distribution network 100 and can, subsequent to receiving that content, provide a response to the received content. In some embodiments, for example, the received content can comprise one or several questions, prompts, or the like, and the response to the received content can comprise an answer to those one or several questions, prompts, or the like.), the first prompt comprising a portion of text to handwrite (Fig. 9, Paragraph [0145] – WANG discloses the assessment processor 904 may generate text, images, audio, video, or other digital data to initiate an interaction from the user. The data may comprise a test item associated with an assessment. In some examples, the assessment processor 904 may receive one or more responses to the provided test item as part of the assessment.); third analyzing, using the first image-to-text model, the second handwritten image (Fig. 13, Paragraph [0204] – WANG discloses the system may receive a plurality of responses 1320 that are compared with an alphabet, letters, or other characters.), the third analyzing resulting in a third text output corresponding to the second handwritten image (Fig. 15, Paragraph [0226] – WANG discloses in illustration 1500, the content management server 102 may receive a digital representation of handwritten user input 1510.
The content management server 102 may determine the X-coordinate and Y-coordinate and generate a feature set from the derivatives of these values. One or more scores may be generated for individual characters or words overall, in order to generate a word prediction based on character-level hypotheses and/or the scores. Once the word is predicted as the test response, the word may be submitted for scoring purposes.); and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score (Fig. 18, Paragraph [0261] – WANG discloses at 1808, the first response score may be compared with a threshold. For example, the content management server 102 may compare the first response score determined from the automated handwriting assessment method with a spelling accuracy threshold value and/or a letter accuracy threshold value.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer program product comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score.
That is, DEMCHALK’s computer program product would be modified to further comprise: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score. The motivation behind the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028].
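The independent-claim idea that these combinations build on — deriving a skill score from how much two image-to-text models of different performance levels disagree on the same handwritten image — can be sketched as follows. The model callables and the character-agreement metric are illustrative assumptions, not DEMCHALK's or WANG's actual algorithms:

```python
# Illustrative sketch only: run the same handwritten image through two OCR
# models of different performance levels and score the agreement between
# their outputs. Legible handwriting tends to be read the same by both
# models; poor handwriting makes the weaker model diverge first.

from typing import Callable

def differential_skill_score(
    image: bytes,
    strong_model: Callable[[bytes], str],  # higher-performance image-to-text model
    weak_model: Callable[[bytes], str],    # lower-performance image-to-text model
) -> float:
    """Fraction of character positions on which the two outputs agree."""
    first = strong_model(image)
    second = weak_model(image)
    if not first and not second:
        return 1.0
    same = sum(c1 == c2 for c1, c2 in zip(first, second))
    return same / max(len(first), len(second))
```

A production system would likely use an alignment-aware metric (e.g. edit distance) rather than positional comparison, but the claim language only requires "analyzing a difference" between the two text outputs.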
Regarding claim 13, DEMCHALK teaches the computer program product of claim 7. DEMCHALK fails to explicitly teach further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. However, WANG explicitly teaches further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score (Fig. 14, Paragraph [0219] – WANG discloses at 1410, one or more scores may be generated. The one or more scores may comprise each state's observation score, a ground truth model score, or a test response in association with the handwritten user input.), wherein the fifth text output comprises an analysis of the handwritten image produced by a human (Fig. 14, Paragraph [0187] – WANG discloses a model response score may be determined, in some examples, by a human assessor or in-house annotator.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer program product comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human.
That is, DEMCHALK’s computer program product would be modified to further comprise: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. The motivation behind the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Regarding claim 14, DEMCHALK teaches the computer program product of claim 7. DEMCHALK fails to explicitly teach further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
However, WANG explicitly teaches further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend (Fig. 3, Paragraph [0063] – WANG discloses the user profile data store 301 can further include information identifying one or several user skill levels. In some embodiments, these one or several user skill levels can identify a skill level determined based on past performance by the user interacting with the content delivery network 100, and in some embodiments, these one or several user skill levels can identify a predicted skill level determined based on past performance by the user interacting with the content delivery network 100 and one or several predictive models.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer program product comprising: generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score, with the teachings of WANG of having further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend. That is, DEMCHALK’s computer program product would be modified to further comprise: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
The motivation behind the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Regarding claim 16, DEMCHALK teaches the computer system of claim 15. DEMCHALK fails to explicitly teach further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score. However, WANG explicitly teaches further comprising: receiving a second handwritten image (Fig.
10, Paragraph [0175] – WANG discloses the user interface may receive interactions in response to providing the first test item, including handwritten user input. Digital representations of the handwritten user input may be transmitted by the assessment application 1004 back to the content management servers 102 dynamically or at a predetermined time associated with the assessment.), the second handwritten image generated in response to a first prompt (Fig. 1, Paragraph [0069] – WANG discloses a user can receive content from the content distribution network 100 and can, subsequent to receiving that content, provide a response to the received content. In some embodiments, for example, the received content can comprise one or several questions, prompts, or the like, and the response to the received content can comprise an answer to those one or several questions, prompts, or the like.), the first prompt comprising a portion of text to handwrite (Fig. 9, Paragraph [0145] – WANG discloses the assessment processor 904 may generate text, images, audio, video, or other digital data to initiate an interaction from the user. The data may comprise a test item associated with an assessment. In some examples, the assessment processor 904 may receive one or more responses to the provided test item as part of the assessment.); third analyzing, using the first image-to-text model, the second handwritten image (Fig. 13, Paragraph [0204] – WANG discloses the system may receive a plurality of responses 1320 that are compared with an alphabet, letters, or other characters.), the third analyzing resulting in a third text output corresponding to the second handwritten image (Fig. 15, Paragraph [0226] – WANG discloses in illustration 1500, the content management server 102 may receive a digital representation of handwritten user input 1510. The content management server 102 may determine the X-coordinate and Y-coordinate and generate a feature set from the derivatives of these values.
One or more scores may be generated for individual characters or words overall, in order to generate a word prediction based on character-level hypotheses and/or the scores. Once the word is predicted as the test response, the word may be submitted for scoring purposes.); and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score (Fig. 18, Paragraph [0261] – WANG discloses at 1808, the first response score may be compared with a threshold. For example, the content management server 102 may compare the first response score determined from the automated handwriting assessment method with a spelling accuracy threshold value and/or a letter accuracy threshold value.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer system comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score.
That is, DEMCHALK’s computer system would be modified to further comprise: receiving a second handwritten image, the second handwritten image generated in response to a first prompt, the first prompt comprising a portion of text to handwrite; third analyzing, using the first image-to-text model, the second handwritten image, the third analyzing resulting in a third text output corresponding to the second handwritten image; and generating, by analyzing a difference between the third text output and the portion of text, a second handwriting skill score. The motivation behind the modification would have been to obtain a computer system for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028].
Regarding claim 19, DEMCHALK teaches the computer system of claim 15. DEMCHALK fails to explicitly teach further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. However, WANG explicitly teaches further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score (Fig. 14, Paragraph [0219] – WANG discloses at 1410, one or more scores may be generated. The one or more scores may comprise each state's observation score, a ground truth model score, or a test response in association with the handwritten user input.), wherein the fifth text output comprises an analysis of the handwritten image produced by a human (Fig. 14, Paragraph [0187] – WANG discloses a model response score may be determined, in some examples, by a human assessor or in-house annotator.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer system comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of WANG of having further comprising: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human.
That is, DEMCHALK’s computer system would be modified to further comprise: generating, by analyzing a difference between the first text output and a fifth text output corresponding to the handwritten image, a handwriting skill score, wherein the fifth text output comprises an analysis of the handwritten image produced by a human. The motivation behind the modification would have been to obtain a computer system for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Regarding claim 20, DEMCHALK teaches the computer system of claim 15. DEMCHALK fails to explicitly teach further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
However, WANG explicitly teaches further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend (Fig. 3, Paragraph [0063] – WANG discloses the user profile data store 301 can further include information identifying one or several user skill levels. In some embodiments, these one or several user skill levels can identify a skill level determined based on past performance by the user interacting with the content delivery network 100, and in some embodiments, these one or several user skill levels can identify a predicted skill level determined based on past performance by the user interacting with the content delivery network 100 and one or several predictive models.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of DEMCHALK of having a computer system comprising: generating, by analyzing a difference between the first text output and the second text output, a handwriting skill score, with the teachings of WANG of having further comprising: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend. That is, DEMCHALK’s computer system would be modified to further comprise: generating, by comparing the handwriting skill score to a previous handwriting skill score, a skill trend.
The motivation behind the modification would have been to obtain a computer system for assessing handwriting skill using one or more OCR tools, wherein each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and WANG relate to systems and methods for text extraction and writing assessment: DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text, wherein the text extraction further includes error correction tailored for known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text; WANG discloses systems, methods, and devices that provide a digital assessment of a user's handwriting to assess the user's knowledge of a language and infer objective scores or scales for real-time analysis of a plurality of user devices. See DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and WANG (US 20180293434 A1), Paragraphs [0026] and [0028]. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over DEMCHALK (US 20240212375 A1), hereinafter referenced as DEMCHALK, in view of SLINKOWSKY (US 20220301376 A1), hereinafter referenced as SLINKOWSKY. Regarding claim 9, DEMCHALK teaches the computer program product of claim 7, wherein the stored program instructions are stored in a computer readable storage device in a server data processing system (Fig.
1, Paragraph [0021] – DEMCHALK discloses data storage 135 is a server, computer, hard drive, or other non-transitory storage system designed to store data, such as some or all of a set of documents to be processed by document OCR system 110.), and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system (Fig. 4, Paragraph [0079] – DEMCHALK discloses secondary memory 410 may include other means, devices, components, instrumentalities, or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 400. Such means, devices, components, instrumentalities, or other approaches may include, for example, a removable storage unit 422 and an interface 420. See also Paragraph [0080].). DEMCHALK fails to explicitly teach further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use. However, SLINKOWSKY explicitly teaches further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use (Fig. 4, Paragraph [0074] – SLINKOWSKY discloses the program instructions are stored in a computer-readable storage medium in a server data processing system, and downloaded over a network to a remote data processing system for use in a computer-readable storage medium associated with the remote data processing system, and further comprise program instructions to meter usage of computer usable code in response to a request for the usage, and generate one or more invoices based on the metered usage.).
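As an illustration of the metering-and-invoicing limitation, a minimal sketch follows. This is not SLINKOWSKY's disclosed implementation; the per-call rate, the counter, and the invoice format are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class UsageMeter:
    """Meter program-instruction use per request and bill the metered
    usage (illustrative only; the rate and billing unit are assumed)."""
    rate_per_call: float = 0.05
    calls: int = 0

    def record(self) -> None:
        # One metered use of the downloaded program instructions.
        self.calls += 1

    def invoice(self) -> str:
        # Generate an invoice line based on the metered use.
        total = self.calls * self.rate_per_call
        return f"Invoice: {self.calls} metered calls, total ${total:.2f}"

meter = UsageMeter()
for _ in range(3):
    meter.record()
print(meter.invoice())  # Invoice: 3 metered calls, total $0.15
```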
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK of having a computer program product comprising one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable by a processor, with the teachings of SLINKOWSKY of having further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use. The combination would result in DEMCHALK’s computer program product further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use. The motivation for the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and SLINKOWSKY disclose computer-readable storage mediums storing program instructions. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. SLINKOWSKY discloses systems, methods, and/or computer program products for digital data record authentication that eliminate or minimize the likelihood of hacking,
tampering, corrupting, and misappropriating of votes and voting data associated with digital voting transactions, and further comprise program instructions to meter usage of computer usable code in response to a request for the usage, and generate one or more invoices based on the metered usage. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and SLINKOWSKY (US 20220301376 A1), Paragraphs [0024] and [0055]. Claims 3, 4, 11, 12, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over DEMCHALK (US 20240212375 A1), hereinafter referenced as DEMCHALK, in view of MALVIYA (US 20240362937 A1), hereinafter referenced as MALVIYA. Regarding claim 3, DEMCHALK teaches the computer-implemented method of claim 1. DEMCHALK fails to explicitly teach further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. However, MALVIYA explicitly teaches further comprising: receiving a third handwritten image (Fig. 2, Paragraph [0059] – MALVIYA discloses at operation 202, the text recognition platform 106 can obtain an image file.), the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw (Fig. 1, Paragraph [0022] – MALVIYA discloses the visual data set can comprise information corresponding to one image file or a related collection of image files (e.g., medical records, a set of images associated with a particular healthcare provider).
The visual data set can include handwritten items, hand-drawn items, stylus-written items, stylus-drawn items, photographs, diagrams, and so forth.); fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image (Fig. 1C, Paragraph [0030] – MALVIYA discloses FIG. 1C shows an example prompt 190, including a query 192 and an associated output 194 generated by the text recognition platform in accordance with some implementations of the present technology.); and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score (Fig. 3C, Paragraph [0097] – MALVIYA discloses a region proposal network 356a or 356b can include a convolutional neural network that predicts object bounds and objectness scores (scores that measure how well various locations and classes of objects, such as characters and sequences, are identified at various positions within an image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK of having a computer-implemented method comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of MALVIYA of having further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score.
The combination would result in DEMCHALK’s computer-implemented method further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. The motivation for the modification would have been to obtain a computer-implemented method for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions.
Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018]. Regarding claim 4, DEMCHALK in view of MALVIYA teach the computer-implemented method of claim 3. DEMCHALK fails to explicitly teach wherein the object comprises a geometric shape. However, MALVIYA explicitly teaches wherein the object comprises a geometric shape (Fig. 3C, Paragraph [0094] – MALVIYA discloses the region encoder 120 can accept the input image 346 and extract features from the input image 346 in a prompt-dependent manner through the feature extractor 350a and the feature extractor 350b. Paragraph [0095] – MALVIYA further discloses the feature extractor can include extraction of features (e.g., geometric or other features within the image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK in view of MALVIYA of having a computer-implemented method comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image; second analyzing, using a second image-to-text model having a second performance level, the handwritten image, with the teachings of MALVIYA of having wherein the object comprises a geometric shape. The combination would result in DEMCHALK’s computer-implemented method wherein the object comprises a geometric shape.
The motivation for the modification would have been to obtain a computer-implemented method for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018].
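For illustration of the claim 3 limitation (scoring a hand drawing against the object the second prompt asked for), a minimal sketch follows. Neither reference discloses this implementation; the idea of comparing the recognized label against the prompted object name, the similarity metric, and the 0-100 scale are all assumptions.

```python
from difflib import SequenceMatcher

def drawing_skill_score(prompted_object: str, recognized_label: str) -> float:
    """Compare the object the prompt asked the user to hand draw against
    the label an image-to-text model recognized from the drawing; a
    closer match yields a higher score (hypothetical metric and scale)."""
    match = SequenceMatcher(None, prompted_object.lower(),
                            recognized_label.lower())
    return round(100 * match.ratio(), 1)

# The prompt asked for a triangle; the model's label varies with skill.
print(drawing_skill_score("triangle", "triangle"))  # 100.0
print(drawing_skill_score("triangle", "rectangle"))
```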
Regarding claim 11, DEMCHALK teaches the computer program product of claim 7. DEMCHALK fails to explicitly teach further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. However, MALVIYA explicitly teaches further comprising: receiving a third handwritten image (Fig. 2, Paragraph [0059] – MALVIYA discloses at operation 202, the text recognition platform 106 can obtain an image file.), the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw (Fig. 1, Paragraph [0022] – MALVIYA discloses the visual data set can comprise information corresponding to one image file or a related collection of image files (e.g., medical records, a set of images associated with a particular healthcare provider). The visual data set can include handwritten items, hand-drawn items, stylus-written items, stylus-drawn items, photographs, diagrams, and so forth.); fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image (Fig. 1C, Paragraph [0030] – MALVIYA discloses FIG. 1C shows an example prompt 190, including a query 192 and an associated output 194 generated by the text recognition platform in accordance with some implementations of the present technology.); and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score (Fig.
3C, Paragraph [0097] – MALVIYA discloses a region proposal network 356a or 356b can include a convolutional neural network that predicts object bounds and objectness scores (scores that measure how well various locations and classes of objects, such as characters and sequences, are identified at various positions within an image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK of having a computer program product comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of MALVIYA of having further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. The combination would result in DEMCHALK’s computer program product further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score.
The motivation for the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018]. Regarding claim 12, DEMCHALK in view of MALVIYA teach the computer program product of claim 11. DEMCHALK fails to explicitly teach wherein the object comprises a geometric shape. However, MALVIYA explicitly teaches wherein the object comprises a geometric shape (Fig. 3C, Paragraph [0094] – MALVIYA discloses the region encoder 120 can accept the input image 346 and extract features from the input image 346 in a prompt-dependent manner through the feature extractor 350a and the feature extractor 350b.
Paragraph [0095] – MALVIYA further discloses the feature extractor can include extraction of features (e.g., geometric or other features within the image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK in view of MALVIYA of having a computer program product comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image; second analyzing, using a second image-to-text model having a second performance level, the handwritten image, with the teachings of MALVIYA of having wherein the object comprises a geometric shape. The combination would result in DEMCHALK’s computer program product wherein the object comprises a geometric shape. The motivation for the modification would have been to obtain a computer program product for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018]. Regarding claim 17, DEMCHALK teaches the computer system of claim 15. DEMCHALK fails to explicitly teach further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. However, MALVIYA explicitly teaches further comprising: receiving a third handwritten image (Fig. 2, Paragraph [0059] – MALVIYA discloses at operation 202, the text recognition platform 106 can obtain an image file.), the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw (Fig. 1, Paragraph [0022] – MALVIYA discloses the visual data set can comprise information corresponding to one image file or a related collection of image files (e.g., medical records, a set of images associated with a particular healthcare provider). The visual data set can include handwritten items, hand-drawn items, stylus-written items, stylus-drawn items, photographs, diagrams, and so forth.); fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image (Fig. 1C, Paragraph [0030] – MALVIYA discloses FIG.
1C shows an example prompt 190, including a query 192 and an associated output 194 generated by the text recognition platform in accordance with some implementations of the present technology.); and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score (Fig. 3C, Paragraph [0097] – MALVIYA discloses a region proposal network 356a or 356b can include a convolutional neural network that predicts object bounds and objectness scores (scores that measure how well various locations and classes of objects, such as characters and sequences, are identified at various positions within an image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK of having a computer system comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image, with the teachings of MALVIYA of having further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score.
The combination would result in DEMCHALK’s computer system further comprising: receiving a third handwritten image, the third handwritten image generated in response to a second prompt, the second prompt comprising an object to hand draw; fourth analyzing, using the first image-to-text model, the third handwritten image, the fourth analyzing resulting in a fourth text output corresponding to the third handwritten image; and generating, by analyzing a difference between the fourth text output and the object, a third handwriting skill score. The motivation for the modification would have been to obtain a computer system for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018].
Regarding claim 18, DEMCHALK in view of MALVIYA teach the computer system of claim 17. DEMCHALK fails to explicitly teach wherein the object comprises a geometric shape. However, MALVIYA explicitly teaches wherein the object comprises a geometric shape (Fig. 3C, Paragraph [0094] – MALVIYA discloses the region encoder 120 can accept the input image 346 and extract features from the input image 346 in a prompt-dependent manner through the feature extractor 350a and the feature extractor 350b. Paragraph [0095] – MALVIYA further discloses the feature extractor can include extraction of features (e.g., geometric or other features within the image).). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of DEMCHALK in view of MALVIYA of having a computer system comprising: first analyzing, using a first image-to-text model having a first performance level, a handwritten image, the first analyzing resulting in a first text output corresponding to the handwritten image; second analyzing, using a second image-to-text model having a second performance level, the handwritten image, with the teachings of MALVIYA of having wherein the object comprises a geometric shape. The combination would result in DEMCHALK’s computer system wherein the object comprises a geometric shape.
The motivation for the modification would have been to obtain a computer system for assessing handwriting skill using one or more OCR tools, where each OCR tool may differ in how it extracts the text, such as by using completely different approaches or algorithms to extract writing from images with improved accuracy. Both DEMCHALK and MALVIYA relate to systems and methods for text extraction and recognition. DEMCHALK discloses systems, methods, and computer program products for extracting text from the same document using different OCR tools and comparing the extracted text to identify the best extracted text from the different tools based on certain metrics or characteristics of the extracted text; the text extraction further includes error correction tailored to known OCR errors that are not typically fixed by spellchecker software, improving the quality of the extracted text. MALVIYA discloses systems, methods, and computer-readable media for a text recognition platform that can pre-process information in image files to improve accuracy in response to prompts (e.g., user prompts) that seek to extract specific information from image files; the platform can improve the robustness of its text recognition technique(s) and capture characters and text in a variety of conditions. Please see DEMCHALK (US 20240212375 A1), Paragraphs [0035] and [0048], and MALVIYA (US 20240362937 A1), Paragraphs [0016] and [0018].
Examiner Remarks With respect to claims 7, 8, 9, and 15, along with their depending claims, the examiner understands “computer readable storage media” to be a non-transitory unit and not a signal per se, as disclosed in the applicant’s specification, Paragraph [0030] – “A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.” Conclusion The prior art made of record and not relied upon, but considered pertinent to applicant’s disclosure, is listed below. PATZER et al. (US 20230252230 A1) – A system can include a data store, including a subscriber list, and a computing device in communication therewith. The computing device can perform optical character recognition of an image to identify text portions, each text portion including position data and a subset of text portions including handwritten data. The computing device can associate each text portion with a respective column of a plurality of columns and a respective row of a plurality of rows based on the position data. The computing device can analyze individual ones of text portions associated with a particular column to determine a particular data type corresponding to the particular column.
The computing device can generate a respective entry to the subscriber list for individual ones of a subset of the plurality of rows, with individual ones of the text portions from the particular column being stored in a field associated with the particular data type… Fig. 1, 3, Abstract. ATTAR et al. (US 20230214428 A1) – A system may iteratively scan a portion of a document, extract first data from the portion of the document, and determine, using a trained model, whether the first data corresponds to one or more document types based on one or more confidence thresholds. The system may repeat this process, increasing the portion of the document scanned by a predetermined amount each iteration, until the first data corresponds to the one or more document types based on the one or more confidence thresholds. Responsive to determining the first data corresponds to the one or more document types based on the one or more confidence thresholds, the system may cause a graphical user interface (GUI) of a user device to display a notification indicating a document type match… Fig. 3, 4, Abstract. HE et al. (US 20230037272 A1) – A handwritten content removing method and device and a storage medium. The handwritten content removing method comprises: acquiring an input image of a text page to be processed, the input image comprising a handwritten region, which comprises a handwritten content (S10); identifying the input image so as to determine the handwritten content in the handwritten region (S11); and removing the handwritten content in the input image so as to obtain an output image (S12)… Fig. 1, 2A, Abstract. MANNBY et al. (US 20220375244 A1) – Examples described herein generally relate to systems and methods for handwriting recognition. In an example, a computing device may receive input corresponding to a handwritten word and apply a first recognition model to the input.
The first recognition model may be configured to determine that a first confidence level of a first portion of the input is greater than a second confidence level of a second portion of the input. The computing device may also apply a second recognition model to the input, wherein the second recognition model is different from the first recognition model, and combine results of the first recognition model and the second recognition model to determine a list of candidate words. The computing device may also output one or more candidate words from the list of candidate words… Fig. 1, 3, Abstract.

CASSUTO et al. (US 20200251217 A1): Handwriting analysis is provided by data analysis using machine learning. A handwriting sample is received and the sample is analyzed by one or more analysis components that can include one or more of: segmentation analysis of handwriting with numeric extraction of data, vector analysis of handwriting, demographic data, known diagnoses, data from other manual/motor tasks, and data from other cognitive/higher function tasks. Machine learning is used to adjust or add criteria in at least one of the analysis components, the machine learning comprising a predicted probability of diagnosis based on prior handwriting analysis samples… Fig. 1A, 1B, Abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N SHIMELES, whose telephone number is (571) 272-7663. The examiner can normally be reached M-F 7:30am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEZAWIT NOLAWI SHIMELES/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673
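The MANNBY reference above, like the application's own "first analyzing"/"second analyzing" steps, runs two different recognizers over the same handwritten input and works with the relationship between their outputs. A minimal sketch of that differential idea, not taken from any of the cited references: all names are hypothetical, and the hard-coded strings stand in for the text outputs of two real OCR models. Agreement between the two outputs is scored with a normalized Levenshtein distance:

```python
# Illustrative sketch only: scoring handwriting legibility by comparing
# the text outputs of two recognizers. A real system would obtain
# first_output and second_output from two distinct OCR models.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def handwriting_skill_score(first_text: str, second_text: str) -> float:
    """Map the difference between two text outputs to a score in [0, 1].

    Identical outputs (both recognizers agree) yield 1.0, suggesting
    legible handwriting; large disagreement yields a score near 0.0.
    """
    longest = max(len(first_text), len(second_text), 1)
    return 1.0 - edit_distance(first_text, second_text) / longest

# Example: the two hypothetical recognizers disagree on two characters
# of a ten-character sample, giving a score of 0.8.
first_output = "hello work"   # stand-in for recognizer 1's text output
second_output = "hella word"  # stand-in for recognizer 2's text output
score = handwriting_skill_score(first_output, second_output)
```

This is only one plausible way to turn "a difference between the first text output and the second text output" into a scalar score; a deployed system might instead use word-level alignment or confidence-weighted distances.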

Prosecution Timeline

Apr 26, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability derived from the career allow rate.
