Prosecution Insights
Last updated: April 19, 2026
Application No. 18/559,016

COLLABORATIVE ARTIFICIAL INTELLIGENCE ANNOTATION PLATFORM LEVERAGING BLOCKCHAIN FOR MEDICAL IMAGING

Final Rejection: §101, §103
Filed: Nov 03, 2023
Examiner: PAULS, JOHN A
Art Unit: 3683
Tech Center: 3600 - Transportation & Electronic Commerce
Assignee: The General Hospital Corporation
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability with Interview: 76%

Examiner Intelligence

Career Allow Rate: 49% of resolved cases (404 granted / 829 resolved; -3.3% vs TC avg)
Interview Lift: +27.5% for resolved cases with an interview
Typical Timeline: 3y 9m average prosecution; 46 applications currently pending
Career History: 875 total applications across all art units

Statute-Specific Performance

§101: 28.8% (-11.2% vs TC avg)
§103: 33.4% (-6.6% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 829 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Status of Claims

This action is in reply to the communication filed on 9 October 2025. Claims 1, 4, 10, 16, 19-22, 24, and 26 have been amended. Claims 11 and 12 have been cancelled. Claims 1-10 and 13-26 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The following rejection is formatted in accordance with MPEP 2106. Claims 1-10, 13-21, and 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea), and does not include additional elements that either: 1) integrate the abstract idea into a practical application, or 2) provide an inventive concept - i.e., elements that amount to significantly more than the abstract idea. The claims are directed to an abstract idea because, when considered as a whole, the plain focus of the claims is on an abstract idea.

CLAIMS 1-10 and 13-21

Claim 16 is representative.
Claim 16 recites: A method of providing a collaborative annotation platform, the method comprising: enabling, with an electronic processor, access to a collaborative annotation project associated with at least one medical image; receiving, with the electronic processor, crowdsourced annotations associated with the at least one medical image from a set of annotators; evaluating, with the electronic processor, the crowdsourced annotations; and generating, with the electronic processor, an annotation record associated with the at least one medical image based on the evaluation of the crowdsourced annotations; generating, with the electronic processor, hash information for the annotation record; generating, with the electronic processor, a block based on the hash information; and subsequent to verification of the block using a proof of work procedure, adding, with the electronic processor, the block to a blockchain.

Claim 1 recites a system that executes the steps of the method recited in Claim 16. Claim 21 recites similar limitations, including a step to define a collaborative annotation project.

STEP 1

The claims are directed to a system and a method, which are included in the statutory categories of invention.

STEP 2A PRONG ONE

The claims, as illustrated by Claim 16, recite limitations that encompass an abstract idea within the "mental processes" grouping - concepts performed in the human mind, including observation, evaluation, judgment, and opinion - including: receiving crowdsourced annotations associated with the at least one medical image from a set of annotators; evaluating the crowdsourced annotations; and generating an annotation record associated with the at least one medical image based on the evaluation of the crowdsourced annotations. The claims recite receiving annotations for a medical image from a set of annotators, evaluating the annotations, and generating a record based on the evaluation.
The specification discloses that annotations are received by a server (i.e., the recited processor) over a network. Annotations are made by each of the annotators in the set by entering selections, for example using an interface as shown in Figure 9. Generating annotations is not required by the claims; nonetheless, the specification discloses that annotating radiological images is a purely mental process that is conventional in medicine.

Evaluating annotations received by the system includes evaluating discrepancies between annotators. The specification discloses that a "review mode" is provided for a second annotator to check result agreement between preceding annotators, and, in an example, evaluates differences between three annotators (@ 0059, 0062-0064). Evaluating annotations may also include calculating a value based on data characteristics, time cost, and accuracy, using a mathematical relationship. Analyzing annotations by comparing them to other annotations is a process that, except for generic computer implementation steps, can be performed in the human mind. Generating the annotation record is disclosed as aggregating annotations from the set of annotators and storing the data.

Collecting information, including when limited to particular content, is within the realm of abstract ideas, and analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, are mental processes within the abstract idea category (Electric Power Group v. Alstom S.A., Fed. Cir. 2015-1778, 8/1/2016). As such, the claims recite an abstract idea within the mental process grouping.

The claims, as illustrated by Claim 1, also recite limitations that encompass an abstract idea within the mathematical formula or relationship grouping. The claims recite: evaluating the crowdsourced annotations.
The specification discloses that the annotations may be evaluated for their value or contribution to the dataset using a formula expressly disclosed in the specification. As such, the claims recite a mathematical formula or relationship.

STEP 2A PRONG TWO

The claims recite limitations that include additional elements beyond those that encompass the abstract idea above, including: an electronic processor; and enabling, with an electronic processor, access to a collaborative annotation project associated with at least one medical image. However, these additional elements do not integrate the abstract idea into a practical application of that idea in accordance with the MPEP (see MPEP 2106.05). The processor is recited at a high level of generality such that it amounts to no more than instructions to apply the abstract idea using a generic computer component. These elements merely add instructions to implement the abstract idea on a computer, and generally link the abstract idea to a particular technological environment. Accessing an annotation project is an insignificant extra-solution activity - i.e., a data gathering step. Aggregating and storing results is an extra-solution activity.

The claims recite further additional elements beyond those that encompass the abstract idea above, including: generating hash information for the annotation record; generating a block based on the hash information; and, subsequent to verification of the block using a proof of work procedure, adding the block to a blockchain. The claims recite adding a block, generated based on a hash of the annotation record, to a blockchain, subsequent to verification/proof of work. The specification discloses these functions at a high level of generality without including any technical details as to how they are performed.
Nothing in the specification describes how a hash is generated from an annotation record; how a block is generated from the hash information; how verification using a proof of work is performed; or how the block is added to a blockchain. Nonetheless, Examiner concludes that the specification discloses the recited functions in sufficient detail such that one of ordinary skill in the blockchain arts would have known how to perform the functions. The specification recognizes that blockchain technology has been widely recognized, and merely applies these techniques in a new data environment. As such, generating a block from a hash, and adding the block to a blockchain, is an extra-solution record keeping activity.

Nothing in the claim recites specific limitations directed to an improved computer system, processor, memory, network, database, or Internet. Similarly, the specification is silent with respect to these kinds of improvements. A general purpose computer that applies a judicial exception by use of conventional computer functions, as is the case here, does not qualify as a particular machine, nor does the recitation of a generic computer impose meaningful limits in the claimed process (see Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014)). As such, the additional elements recited in the claim do not integrate the abstract annotation process into a practical application of that process.

STEP 2B

The additional elements identified above do not amount to significantly more than the abstract annotation process. Receiving information, for example over a network, is a well-understood, routine and conventional computer function - i.e., receiving or transmitting data over a network, as in Symantec, TLI, OIP and buySAFE. Recording results of the evaluation using well-known blockchain techniques is a well-understood, routine and conventional computer function - i.e., electronic recordkeeping, as in Alice and Ultramercial.
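For orientation only, the generic hash-block-proof-of-work sequence that the Examiner treats as conventional record keeping can be sketched as below. The SHA-256 choice, block fields, and difficulty target are illustrative assumptions, not details taken from the specification or the claims.

```python
import hashlib
import json

def hash_record(annotation_record: dict) -> str:
    # Canonical JSON serialization, then SHA-256 (one common hashing choice).
    payload = json.dumps(annotation_record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(record_hash: str, prev_hash: str, difficulty: int = 4) -> dict:
    # Proof of work: search for a nonce whose block hash meets the target
    # (here, a hash with `difficulty` leading zero hex digits).
    nonce = 0
    while True:
        header = f"{prev_hash}{record_hash}{nonce}".encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith("0" * difficulty):  # target met: verified
            return {"prev": prev_hash, "record": record_hash,
                    "nonce": nonce, "hash": block_hash}
        nonce += 1

# Subsequent to verification, the block is simply appended to the chain.
chain = [{"hash": "0" * 64}]  # genesis placeholder
record_hash = hash_record({"image": "img-001", "annotation": "nodule"})
chain.append(mine_block(record_hash, chain[-1]["hash"]))
```

The point of the sketch is that every step is a textbook blockchain operation, which is consistent with the Examiner's characterization of these limitations as well-understood and conventional.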
Storing and retrieving information from memory is a routine and conventional computer function, as in Versata and OIP Tech. The additional structural elements, or combination of elements, in the claims, other than the abstract idea per se, amount to no more than a recitation of generic computer structure (i.e., an electronic processor). The processor is disclosed in the specification as being generic, purely conventional, and/or known in the industry. Because the specification describes these additional elements in general terms, without describing particulars, Examiner concludes that the claim limitations may be broadly, but reasonably, construed as reciting well-understood, routine and conventional computer components and techniques. The specification describes the elements in a manner that indicates they are sufficiently well-known that the specification need not describe the particulars in order to satisfy 35 U.S.C. 112.

Considered as an ordered combination, the limitations recited in the claims add nothing that is not already present when the steps are considered individually. As such, the additional elements recited in the claim do not provide significantly more than the abstract annotation process, or an inventive concept.
The dependent claims add additional features, including: those that merely serve to further narrow the abstract idea above, such as further limiting the basis for determining value (Claims 10, 20), further limiting the type of annotations, including a confidence level (Claims 3-5), further limiting the type of digital reward (Claim 7), and further limiting the type of model (Claims 14, 15); those that recite additional abstract ideas, such as receiving an image, applying a ML model, and generating annotations (Claim 18), determining annotator contribution and value (Claims 8-10, 19), and determining a digital reward (Claim 6); those that recite well-understood, routine and conventional activity or computer functions, such as generating training data, developing and storing a model (Claims 13, 17), and requesting annotations (Claim 2); and those that recite insignificant extra-solution activities or that are an ancillary part of the abstract idea.

Examiner takes Official Notice that generating training data, and developing a model, when recited at a high level of generality as here, are old, well-known, and purely conventional. The limitations recited in the dependent claims, in combination with those recited in the independent claims, add nothing that integrates the abstract idea into a practical application, or that amounts to significantly more. These elements merely narrow the abstract idea, recite additional abstract ideas, or append conventional activity to the abstract process. As such, the additional elements do not integrate the abstract idea into a practical application, or provide an inventive concept that transforms the claims into a patent eligible invention.

The apparatus claims are no different from the method claims in substance. "The equivalence of the method, system and media claims is readily apparent." "The only difference between the claims is the form in which they were drafted." (Bancorp).
The method claims recite the abstract idea implemented on a generic computer, while the apparatus claims recite generic computer components configured to implement the same idea. Specifically, Claims 1-10 and 13-15 merely add the generic hardware noted above that nearly every computer will include. The apparatus claims' requirement that the same method be performed with a programmed computer does not alter the method's patentability under 35 U.S.C. 101 (In re Grams). Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

CLAIM 26

Claim 26 is independent. Claim 26 recites: A collaborative annotation system, the system comprising: an electronic processor configured to: obtain crowdsourced annotations associated with at least one medical image from a dispersed group of annotators, evaluate the crowdsourced annotations to determine: (i) an annotation contribution to the crowdsourced annotations for each annotator included in the dispersed group of annotators; and (ii) an annotation accuracy related to the crowdsourced annotations, and generate a digital reward for at least one annotator included in the dispersed group of annotators based on a corresponding annotation contribution for the at least one annotator and the annotation accuracy related to the crowdsourced annotations.

STEP 1

The claim is directed to a system, which is included in the statutory categories of invention.
STEP 2A PRONG ONE

Claim 26 recites limitations that encompass an abstract idea within the "mental processes" grouping - concepts performed in the human mind, including observation, evaluation, judgment, and opinion - including: obtain crowdsourced annotations associated with at least one medical image from a dispersed group of annotators; and evaluate the crowdsourced annotations to determine: (i) an annotation contribution to the crowdsourced annotations for each annotator included in the dispersed group of annotators; and (ii) an annotation accuracy related to the crowdsourced annotations. The claims recite obtaining annotations for a medical image from a set of annotators, and evaluating the annotations.

The specification discloses that annotations are received by a server (i.e., the recited processor) over a network. Annotations are made by each of the annotators in the set by entering selections, for example using an interface as shown in Figure 9. Generating annotations is not required by the claims; nonetheless, the specification discloses that annotating radiological images is a purely mental process that is conventional in medicine. Evaluating annotations received by the system includes evaluating an annotation contribution and an annotation accuracy. Accuracy is measured by comparing the annotation to a "majority rule". Analyzing annotations by comparing them to other annotations is a process that, except for generic computer implementation steps, can be performed in the human mind.

Collecting information, including when limited to particular content, is within the realm of abstract ideas, and analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, are mental processes within the abstract idea category (Electric Power Group v. Alstom S.A., Fed. Cir. 2015-1778, 8/1/2016). As such, the claims recite an abstract idea within the mental process grouping.
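For a technical reader, the "majority rule" accuracy comparison described above can be sketched as below, assuming simple categorical labels per image. The data layout and scoring (agreement rate with the per-image majority label) are illustrative assumptions, not taken from the specification.

```python
from collections import Counter

def majority_rule_accuracy(labels: dict) -> dict:
    # labels: annotator -> {image_id: label}.
    # Score each annotator by agreement with the per-image majority label.
    images = {img for per_annotator in labels.values() for img in per_annotator}
    majority = {
        img: Counter(
            per[img] for per in labels.values() if img in per
        ).most_common(1)[0][0]
        for img in images
    }
    return {
        annotator: sum(per[img] == majority[img] for img in per) / len(per)
        for annotator, per in labels.items()
    }

scores = majority_rule_accuracy({
    "annotator_1": {"img_a": "nodule", "img_b": "clear"},
    "annotator_2": {"img_a": "nodule", "img_b": "clear"},
    "annotator_3": {"img_a": "mass",   "img_b": "clear"},
})
# annotator_3 disagrees with the majority on img_a
```

This kind of label comparison is the computation the Examiner characterizes as one a person could perform mentally, absent the generic computer implementation.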
The claims, as illustrated by Claim 1, also recite limitations that encompass an abstract idea within the mathematical formula or relationship grouping. The claims recite: evaluating the crowdsourced annotations. The specification discloses that the annotations may be evaluated for their value or contribution to the dataset, and an accuracy, using a formula expressly disclosed in the specification. As such, the claims recite a mathematical formula or relationship.

The claims, as illustrated by Claim 26, also recite limitations that encompass an abstract idea within the "certain methods of organizing human activity" grouping - fundamental economic principles or practices, including hedging, insurance, and mitigating risk - including: generate a digital reward for at least one annotator included in the dispersed group of annotators based on a corresponding annotation contribution for the at least one annotator and the annotation accuracy related to the crowdsourced annotations. The claims recite evaluating annotations from a set of annotators and determining a financial reward for their contribution. Rewarding annotators for their digital contribution and accuracy to a dataset is a fundamental economic activity - paying people for their work efforts. As such, the claims recite an abstract idea within the certain methods of organizing human activity grouping.

STEP 2A PRONG TWO

The claims recite limitations that include additional elements beyond those that encompass the abstract idea above, including: an electronic processor. However, these additional elements do not integrate the abstract idea into a practical application of that idea in accordance with the MPEP (see MPEP 2106.05). The processor is recited at a high level of generality such that it amounts to no more than instructions to apply the abstract idea using a generic computer component.
These elements merely add instructions to implement the abstract idea on a computer, and generally link the abstract idea to a particular technological environment. Additionally, obtaining annotations is an insignificant extra-solution activity - i.e., a data gathering step. Nothing in the claim recites specific limitations directed to an improved computer system, processor, memory, network, database, or Internet. Similarly, the specification is silent with respect to these kinds of improvements. A general purpose computer that applies a judicial exception by use of conventional computer functions, as is the case here, does not qualify as a particular machine, nor does the recitation of a generic computer impose meaningful limits in the claimed process (see Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014)). As such, the additional elements recited in the claim do not integrate the abstract annotation reward process into a practical application of that process.

STEP 2B

The additional elements identified above do not amount to significantly more than the abstract annotation process. Obtaining information, for example over a network, is a well-understood, routine and conventional computer function - i.e., receiving or transmitting data over a network, as in Symantec, TLI, OIP and buySAFE. Storing and retrieving information from memory is a routine and conventional computer function, as in Versata and OIP Tech. The additional structural elements, or combination of elements, in the claims, other than the abstract idea per se, amount to no more than a recitation of generic computer structure (i.e., an electronic processor). The processor is disclosed in the specification as being generic, purely conventional, and/or known in the industry.
Because the specification describes these additional elements in general terms, without describing particulars, Examiner concludes that the claim limitations may be broadly, but reasonably, construed as reciting well-understood, routine and conventional computer components and techniques. The specification describes the elements in a manner that indicates they are sufficiently well-known that the specification need not describe the particulars in order to satisfy 35 U.S.C. 112. Considered as an ordered combination, the limitations recited in the claims add nothing that is not already present when the steps are considered individually. As such, the additional elements recited in the claim do not provide significantly more than the abstract annotation reward process, or an inventive concept. There are no dependent claims that add additional features. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

CLAIMS 22-25

Claims 22-25 are not included in the rejection above because the claims do not recite any judicial exceptions. For example, Claim 22 recites accessing annotation records and generating training data therefrom. Similarly, Claim 24 is directed to accessing training data and developing a model associated with a medical image analysis function using the training data. While these features are generically recited, and may be considered conventional computer functions, the claims do not recite any limitation that represents a judicial exception.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 13-19, 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Heim et al., "Large-scale medical image annotation with crowd-powered algorithms," 8 September 2018, in view of Diedrich et al. (US PGPUB 2022/0270146 A1).
CLAIMS 1 and 16

Heim discloses a collaborative medical image annotation system/platform, the collaborative annotation system comprising: an electronic processor (Heim Section 1 Introduction; Section 3.2 Architecture, Figure 3) configured to: enable access to a collaborative annotation project associated with at least one medical image (Heim 3.1 Overview, 4.2 Expert Annotations); receive crowdsourced annotations associated with the at least one medical image from a set of annotators (Heim 2.1 & 2.2 Data Annotation, 3.4 Detection, 3.5 Refinement, 4.2 Expert Annotations); evaluate the crowdsourced annotations (Heim 2.3 Comparison, 4 Experiments, 4.3 Evaluation); and generate an annotation record associated with the at least one medical image based on the evaluation of the crowdsourced annotations (Heim 3.1 Overview, 3.2 Architecture, 3.6 Merging).

Heim discloses a web-based system and method for obtaining medical image annotations using "common crowdsourcing platforms" and techniques. A large-scale crowd of workers (i.e., annotators) can access annotation tasks related to medical images with a web application, and provide annotations based on a visual evaluation of the image using predetermined selections the workers can choose from. Annotations are evaluated against expert or reference annotations, and the annotations from different workers are merged (i.e., generating an annotation record) for later use in training, or re-training, a classifier.

With respect to the following limitations: generating hash information for the annotation record; generating a block based on the hash information; and subsequent to verification of the block using a proof of work procedure, adding the block to a blockchain (Diedrich 0030-0033): Heim discloses storing annotations in a database, but does not disclose storing annotations in an electronic blockchain ledger.
Diedrich discloses a machine learning image annotation marketplace system using immutable blockchain ledgers, and their associated hash values and verification procedures. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included storing data in blockchain ledgers, in accordance with the teaching of Diedrich, in order to provide an immutable ledger.

CLAIM 2

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 1. Additionally, Heim discloses the following limitations: wherein the electronic processor is configured to generate and transmit a request for annotations to each annotator associated with the collaborative annotation project, wherein the set of annotators is associated with the collaborative annotation project (Heim 3.2 Architecture; 4.3.2 Refinement) - disclosing assessing the performance of individual annotators.

CLAIMS 3 and 4

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 1. Additionally, Heim discloses the following limitations: wherein the crowdsourced annotations include a set of classification labels associated with the at least one medical image; wherein the crowdsourced annotations include a set of object detection labels associated with the at least one medical image; wherein each classification label is associated with an annotator included in the set of annotators (Heim 2.1, 2.2 Annotation, 4.3.2 Refinement) - disclosing object detection and classification labels for individual annotators.

CLAIMS 13-15

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 1.
Additionally, Heim discloses the following limitations: wherein the electronic processor is configured to store the crowdsourced annotations as training data for at least one machine learning model associated with medical image analysis (Heim Figure 3, 2.1 Annotation); wherein the at least one machine learning model is a classification model; wherein the at least one machine learning model is an object detection model (Heim 2.2 Annotation). Heim discloses storing the annotations in a database and using the merged annotations to train or retrain a classifier and/or detection algorithm.

CLAIMS 17-18

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 16. Additionally, Heim discloses the following limitations: generating training data based on the annotation record; developing a machine learning model using the training data; and storing the machine learning model (Heim 2.1 Annotation); receiving a first medical image; applying the machine learning model to the first medical image; and generating a second medical image based on the application of the machine learning model, wherein the second medical image includes a predicted annotation for the first medical image (Heim 3.3 Automatic Contour Initialization). Heim discloses generating training data to train and/or retrain a neural network classifier. Heim discloses receiving and analyzing medical images with the trained neural network, including generating a second image that includes a segmentation outline (i.e., an annotation).

CLAIM 19

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 16.
Additionally, Heim discloses the following limitations: determine the contribution of each annotator to the crowdsourced annotations by: for each annotator, determining a value associated with at least one crowdsourced annotation included in the crowdsourced annotations, wherein the at least one crowdsourced annotation is associated with an annotator included in the set of annotators (Heim 4.1.1, 4.1.2 Crowd-Sourced Annotations) - disclosing determining a monetary value for each worker's contribution based in part on a time metric.

CLAIMS 24 and 25

Heim discloses a collaborative annotation system, the collaborative annotation system comprising: an electronic processor (Heim Section 1 Introduction; Section 3.2 Architecture, Figure 3) configured to: access training data associated with annotation records based on crowdsourced annotations obtained for a collaborative annotation project associated with a set of medical images, and develop a model using machine learning using the training data, wherein the model is associated with a medical image analysis function; receive a medical image associated with a patient, apply the model to the medical image to determine a predicted annotation for the medical image, and generate an annotated medical image including the predicted annotation for the medical image (Heim 2.1 Annotation).

Heim discloses obtaining annotations from individual workers, aggregating and storing the annotations as training data, and using the training data to train or retrain a medical image classifier. The trained classifier is applied to medical images. With respect to the following limitation: access from an immutable ledger (Diedrich 0030-0033): Heim discloses storing annotations in a database for training a model, but does not disclose storing annotations in an electronic blockchain immutable ledger.
Diedrich discloses a machine learning image annotation marketplace system using immutable blockchain ledgers, and their associated hash values and verification procedures. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included storing data in blockchain ledgers, in accordance with the teaching of Diedrich, in order to provide an immutable ledger.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Heim et al., "Large-scale medical image annotation with crowd-powered algorithms," 8 September 2018, in view of Diedrich et al. (US PGPUB 2022/0270146 A1), and in view of Park et al. (US PGPUB 2022/0004863 A1).

CLAIM 5

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 1. With respect to the following limitation: wherein each crowdsourced annotation is associated with a confidence metric indicating a confidence level of a corresponding annotator with an associated crowdsourced annotation (Park 0034, 0046): Park discloses a medical image annotation system that includes user specific labels indicating the confidence of the user in making the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included indicating confidence in the annotation, in accordance with the teaching of Park, in order to allow for more accurate training of the model.

Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Heim et al., "Large-scale medical image annotation with crowd-powered algorithms," 8 September 2018, in view of Diedrich et al. (US PGPUB 2022/0270146 A1), and in view of Keski-Valkama (US PGPUB 2020/0311553 A1).

CLAIMS 6 and 7

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 1.
Additionally, Heim discloses the following limitations: wherein the electronic processor is configured to determine a reward for each annotator based on the evaluation of the crowdsourced annotations; (Heim 3.1 Overview, 3.2 Architecture, 4.1 Annotations). Heim discloses determining a monetary reward based on each worker's contribution, but not a digital reward such as a cryptocurrency. Keski-Valkama (¶ 0061) teaches a system and method for compensating content contributors using crowdsourcing techniques, where the value of the payment is stored in a blockchain cryptocurrency. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards in blockchain cryptocurrency, in accordance with the teaching of Keski-Valkama, in order to allow for compensation in alternate forms of payment.

CLAIMS 8 and 9

The combination of Heim/Diedrich/Keski-Valkama discloses the limitations shown above relative to Claim 6. Additionally, Heim discloses the following limitations: evaluate the crowdsourced annotations by determining a contribution of each annotator to the crowdsourced annotations, (Heim 4.1.1, 4.1.2 Crowd-Sourced Annotations); determine the contribution of each annotator to the crowdsourced annotations by, for each annotator, determining a value associated with at least one crowdsourced annotation included in the crowdsourced annotation, wherein the at least one crowdsourced annotation is associated with an annotator included in the set of annotators, (Heim 4.1.1, 4.1.2 Crowd-Sourced Annotations) – disclosing determining a monetary value for each worker's contribution based in part on a time metric and annotation accuracy.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Heim et al.: "Large-scale medical image annotation with crowd-powered algorithms"; 8 September, 2018; in view of Diedrich et al.
(US PGPUB 2022/0270146 A1); in view of Keski-Valkama: (US PGPUB 2020/0311553 A1); and in view of Welinder et al.: (US PGPUB 2013/0346356 A1).

CLAIM 10

The combination of Heim/Diedrich/Keski-Valkama discloses the limitations shown above relative to Claim 9. With respect to the following limitations: determine the value based on a characteristic of the at least one crowdsourced annotation, a time metric of entering the at least one crowdsourced annotation, and an accuracy of the at least one crowdsourced annotation; (Welinder 0013, 0015, 0025, 0027, 0036, 0040, 0072, 0073). Heim discloses providing monetary rewards (i.e. a value) to annotators based on a time element, but does not disclose determining the value based on a characteristic of the annotation, a time metric, and an accuracy of the annotation. Welinder teaches a system and method for rewarding crowdsourced annotators to compensate for labeling source data to be used in training a machine learning algorithm in a variety of applications, including medical diagnosis. Welinder teaches generating rewards based on accuracy, the difficulty of the source data (i.e. characteristics of the crowdsourced annotation), and the time to make the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards for annotations based on difficulty, accuracy and time, in accordance with the teaching of Welinder, in order to provide an incentive to annotators.

Claims 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Heim et al.: "Large-scale medical image annotation with crowd-powered algorithms"; 8 September, 2018; in view of Diedrich et al. (US PGPUB 2022/0270146 A1); and in view of Welinder et al.: (US PGPUB 2013/0346356 A1).

CLAIM 20

The combination of Heim/Diedrich discloses the limitations shown above relative to Claim 19.
With respect to the following limitations: determine the value based on a characteristic of the at least one crowdsourced annotation, a time metric of entering the at least one crowdsourced annotation, and an accuracy of the at least one crowdsourced annotation, (Welinder 0013, 0015, 0025, 0027, 0036, 0040, 0072, 0073). Heim discloses providing monetary rewards (i.e. a value) to annotators based on a time element, but does not disclose determining the value based on a characteristic of the annotation, a time metric, and an accuracy of the annotation. Welinder teaches a system and method for rewarding crowdsourced annotators to compensate for labeling source data to be used in training a machine learning algorithm in a variety of applications, including medical diagnosis. Welinder teaches generating rewards based on accuracy, the difficulty of the source data (i.e. characteristics of the crowdsourced annotation), and the time to make the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards for annotations based on difficulty, accuracy and time, in accordance with the teaching of Welinder, in order to provide an incentive to annotators.
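As a concrete illustration of the Welinder-style reward scheme described above, a per-annotation value that scales with task difficulty (a characteristic of the annotation), time spent, and accuracy might be sketched as follows. The function name, the weights, and the exact formula are hypothetical assumptions for illustration, not taken from Welinder or from the claims.

```python
def annotation_reward(difficulty: float, seconds_spent: float,
                      accuracy: float, base_rate: float = 0.05) -> float:
    """Return an illustrative per-annotation payment in dollars.

    difficulty and accuracy are expected in [0, 1]; seconds_spent >= 0.
    All weights are assumed values, not drawn from any reference.
    """
    if not (0.0 <= difficulty <= 1.0 and 0.0 <= accuracy <= 1.0):
        raise ValueError("difficulty and accuracy must be in [0, 1]")
    # Harder tasks pay up to double the base rate.
    difficulty_multiplier = 1.0 + difficulty
    # A time bonus caps at 60 seconds, so slow work is not rewarded without bound.
    time_bonus = min(seconds_spent, 60.0) / 60.0 * base_rate
    # Accuracy scales the whole payment: inaccurate work earns proportionally less.
    return round((base_rate * difficulty_multiplier + time_bonus) * accuracy, 4)
```

Under these assumed weights, `annotation_reward(0.5, 30, 0.9)` evaluates to 0.09, i.e. a medium-difficulty, half-minute annotation at 90% accuracy earns nine cents.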
CLAIM 21

Heim discloses a collaborative medical image annotation system/platform, the collaborative annotation system comprising: an electronic processor; (Heim Section 1 Introduction; Section 3.2 Architecture, Figure 3); configured to: define a collaborative annotation project associated with a set of medical images, (Heim 3.1 Overview, 3.2 Architecture, 4.2 Expert Annotations); obtain crowdsourced annotations for the collaborative annotation project from a dispersed group of annotators, (Heim 2.1 & 2.2 Data Annotation, 3.4 Detection, 3.5 Refinement, 4.2 Expert Annotations); evaluate the crowdsourced annotations, (Heim 2.3 Comparison, 4 Experiments, 4.3 Evaluation); and generate at least one annotation record based on the evaluation of the crowdsourced annotations, (Heim 3.1 Overview, 3.2 Architecture, 3.6 Merging).

Heim discloses a web-based system and method for creating and distributing annotation tasks with a web application (i.e. define a collaborative annotation project) for medical images. Heim discloses obtaining medical image annotations using "common crowdsourcing platforms" and techniques. A large-scale crowd of workers (i.e. annotators) can access annotation tasks related to medical images with a web application, and provide annotations based on a visual evaluation of the image using predetermined selections the workers can choose from. Annotations are evaluated against experts or reference annotations, and the annotations from different workers are merged (i.e. generate an annotation record) for later use in training, or retraining, a classifier.

With respect to the following limitations: wherein the at least one annotation record is stored in an immutable ledger; (Diedrich 0030 - 0033). Heim discloses storing annotations in a database, but does not disclose storing annotations in an electronic blockchain ledger.
Diedrich discloses a machine learning image annotation marketplace system using immutable blockchain ledgers, and their associated hash values and verification procedures. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included storing data in blockchain ledgers, in accordance with the teaching of Diedrich, in order to provide an immutable ledger.

With respect to the following limitations: evaluate the crowdsourced annotations based on, for each crowdsourced annotation, a corresponding characteristic of the crowdsourced annotation, a corresponding time metric of entering the respective crowdsourced annotation, and a corresponding accuracy of the respective crowdsourced annotation; (Welinder 0013, 0015, 0025, 0027, 0036, 0040, 0072, 0073). Heim discloses providing monetary rewards (i.e. a value) to annotators based on a time element, but does not disclose determining the value based on a characteristic of the annotation, a time metric, and an accuracy of the annotation. Welinder teaches a system and method for rewarding crowdsourced annotators to compensate for labeling source data to be used in training a machine learning algorithm in a variety of applications, including medical diagnosis. Welinder teaches generating rewards based on accuracy, the difficulty of the source data (i.e. characteristics of the crowdsourced annotation), and the time to make the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards for annotations based on difficulty, accuracy and time, in accordance with the teaching of Welinder, in order to provide an incentive to annotators.

Claims 22, 23 and 26 are rejected under 35 U.S.C.
103 as being unpatentable over Heim et al.: "Large-scale medical image annotation with crowd-powered algorithms"; 8 September, 2018; in view of Welinder et al.: (US PGPUB 2013/0346356 A1).

CLAIMS 22 and 23

Heim discloses a collaborative annotation system, the collaborative annotation system comprising: an electronic processor; (Heim Section 1 Introduction; Section 3.2 Architecture, Figure 3); configured to: access at least one annotation record, wherein the at least one annotation record is based on crowdsourced annotations obtained for a collaborative annotation project associated with a set of medical images, and generate training data based on the at least one annotation record; train a machine learning model using the training data, wherein the machine learning model performs a medical image analysis function; (Heim 2.1 Annotation). Heim discloses obtaining annotations from individual crowdsourced workers for a set of medical images, aggregating and storing the annotations, and using the aggregated annotations to generate training data used to train or retrain a medical image classifier.

With respect to the following limitations: wherein the at least one annotation record is associated with a value metric that is based on at least one of a characteristic of the crowdsourced annotation; or an accuracy of the crowdsourced annotations; (Welinder 0013, 0015, 0025, 0027, 0036, 0040, 0072, 0073). Heim discloses providing monetary rewards (i.e. a value) to annotators based on a time element, but does not disclose determining the value based on a contribution of the annotation, and an accuracy of the annotation. Welinder teaches a system and method for rewarding crowdsourced annotators to compensate for labeling source data to be used in training a machine learning algorithm in a variety of applications, including medical diagnosis. Welinder teaches generating rewards based on accuracy, the difficulty of the source data (i.e.
characteristics of the crowdsourced annotation), and the time to make the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards for annotations based on difficulty, accuracy and time, in accordance with the teaching of Welinder, in order to provide an incentive to annotators.

CLAIM 26

Heim discloses a collaborative annotation system, the collaborative annotation system comprising: an electronic processor; (Heim Section 1 Introduction; Section 3.2 Architecture, Figure 3); configured to: obtain crowdsourced annotations for the collaborative annotation project from a dispersed group of annotators, (Heim 2.1 & 2.2 Data Annotation, 3.4 Detection, 3.5 Refinement, 4.2 Expert Annotations); evaluate the crowdsourced annotations to determine an annotation contribution to the crowdsourced annotations for each annotator included in the dispersed group of annotators; and an annotation accuracy related to the crowdsourced annotations, (Heim 2.3 Comparison, 4 Experiments, 4.3 Evaluation); and generate a digital reward for at least one annotator included in the dispersed group of annotators; (Heim 2.3 Comparison to Experts; 3.1 Overview, 3.2 Architecture) – disclosing determining rewards for providing annotations.

Heim discloses a web-based system and method for creating and distributing annotation tasks with a web application for medical images. Heim discloses obtaining medical image annotations using "common crowdsourcing platforms" and techniques. A large-scale crowd of workers (i.e. annotators) can access annotation tasks related to medical images with a web application, and provide annotations based on a visual evaluation of the image using predetermined selections the workers can choose from. Annotations are evaluated against experts or reference annotations - i.e.
for accuracy - and a digital reward is generated for the annotator. Heim discloses using the "Amazon Mechanical Turk" (MTurk) platform for crowdsourcing. MTurk inherently includes digital rewards.

With respect to the following limitations: generate a digital reward for at least one annotator included in the dispersed group of annotators based on a corresponding annotation contribution for the at least one annotator and the annotation accuracy related to the crowdsourced annotations; (Welinder 0013, 0015, 0025, 0027, 0036, 0040, 0072, 0073). Heim discloses providing monetary rewards (i.e. a value) to annotators based on a time element, but does not disclose determining the value based on a contribution of the annotation, and an accuracy of the annotation. Welinder teaches a system and method for rewarding crowdsourced annotators to compensate for labeling source data to be used in training a machine learning algorithm in a variety of applications, including medical diagnosis. Welinder teaches generating rewards based on accuracy, the difficulty of the source data (i.e. characteristics of the crowdsourced annotation), and the time to make the annotation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the annotation system of Heim so as to have included rewards for annotations based on difficulty, accuracy and time, in accordance with the teaching of Welinder, in order to provide an incentive to annotators.

Response to Arguments

Applicant's arguments filed 9 October, 2025 have been fully considered but they are not persuasive.

The U.S.C.
101 Rejection

Applicant argues by stating: "A claim with limitation(s) that cannot practically be performed in the human mind does not recite a mental process."; and quotes from the MPEP, which states: "Claims do not recite a mental process when they do not contain limitations that can practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitations." Initially, Examiner notes that these statements do not have the same meaning. The MPEP limits mental process claims to those that contain limitations that can practically be performed in the human mind: "In contrast, claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions." (Id.) The MPEP does not require that ALL claim limitations be performed in the human mind, excluding "apply it" limitations and conventional computer functions such as receiving, transmitting, storing and displaying. Applicant's statement at least implies that a claim should not be considered as being directed to a mental process if ANY limitation cannot be performed mentally.

In particular, Applicant asserts that "generating hash information" and "generating a block based on the hash information . . . and adding the block to a blockchain", relative to Claims 1 and 16, and "store in an immutable ledger", relative to Claim 21, cannot be performed mentally. Examiner does not dispute Applicant's assertion; nonetheless, generating hashes, generating blocks and adding them to a blockchain are generic computer functions. Applicant did not invent the routine and conventional blockchain techniques relied on; rather, the invention merely "leverages blockchain for medical imaging". Applicant does not dispute that receiving and evaluating annotations can be performed mentally.

The U.S.C.
102 Rejection

Applicant argues that Heim does not disclose generating hash information and blockchain/immutable ledger techniques. Examiner agrees. However, on further search and consideration, a new ground of rejection in view of Diedrich is made herein. Similarly, Applicant asserts that Heim does not disclose evaluating annotations based on a characteristic, a time or accuracy. Examiner agrees. However, on further search and consideration, a new ground of rejection in view of Welinder is made herein.

CONCLUSION

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US PGPUB 2020/0364665 A1 to Shapiro discloses a system and method for obtaining crowdsourced annotations for training a machine learning classifier, including determining rewards based on the difficulty of the annotation task. US PGPUB 2022/0366250 A1 to Kim et al. discloses rewarding and distributing labeling work based on the difficulty of the task.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to John A. Pauls w

Prosecution Timeline

Nov 03, 2023
Application Filed
Apr 04, 2025
Non-Final Rejection — §101, §103
Oct 09, 2025
Response Filed
Dec 08, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586676
IMAGE INTERPRETATION MODEL DEVELOPMENT
2y 5m to grant · Granted Mar 24, 2026
Patent 12586668
System and Method for Patient Care Improvement
2y 5m to grant · Granted Mar 24, 2026
Patent 12567483
AUTOMATED LABELING OF USER SENSOR DATA
2y 5m to grant · Granted Mar 03, 2026
Patent 12548670
EMERGENCY MANAGEMENT SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Patent 12548664
ADAPTIVE CONTROL OF MEDICAL DEVICES BASED ON CLINICIAN INTERACTIONS
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
76%
With Interview (+27.5%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 829 resolved cases by this examiner. Grant probability derived from career allow rate.
