Prosecution Insights
Last updated: April 17, 2026
Application No. 19/022,275

METHOD AND SYSTEM FOR AI-BASED EVALUATION OF GAME ANIMALS

Non-Final Office Action: §101, §103, §112

Filed: Jan 15, 2025
Examiner: SOMERS, MARC S
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 65% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 4y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (364 granted / 563 resolved; +9.7% vs TC avg)
Interview Lift: +34.6% (strong), measured on resolved cases with an interview versus without
Typical Timeline: 4y 0m avg prosecution; 36 applications currently pending
Career History: 599 total applications across all art units
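The headline figures above are simple ratios. A quick check of the arithmetic (the with/without-interview rates are not reported on this page, so the two placeholder values below are hypothetical and only illustrate how the lift is computed):

```python
# Career allow rate: granted / resolved, figures as reported above.
granted, resolved = 364, 563
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 64.7%, displayed as 65%

# Interview lift is reported as +34.6 percentage points. The underlying
# with/without rates are not shown here; these two values are hypothetical
# placeholders illustrating the subtraction.
rate_without = 0.55
rate_with = rate_without + 0.346
print(f"Interview lift: +{rate_with - rate_without:.1%}")
```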

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 47.9% (+7.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 563 resolved cases
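The per-statute deltas above all point at the same baseline; recovering the implied Tech Center average from each reported (rate, delta) pair is a useful consistency check:

```python
# (examiner allowance rate %, delta vs TC avg %) per statute, as reported above.
stats = {
    "101": (18.0, -22.0),
    "103": (47.9, 7.9),
    "102": (10.1, -29.9),
    "112": (15.1, -24.9),
}
for statute, (rate, delta) in stats.items():
    # delta = examiner rate - TC average, so the TC average is rate - delta
    implied_tc_avg = round(rate - delta, 1)
    print(f"\u00a7{statute}: implied TC average {implied_tc_avg}%")
# every row implies the same TC average estimate: 40.0%
```

That all four statutes resolve to a single 40.0% baseline suggests the page uses one Tech Center-wide average rather than per-statute averages.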

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 2, 10, 11 are objected to because of the following informalities:

Claim 2 recites the acronym “IR” without first defining it.

Claims 10 and 11 recite “the permissioned blockchain” where it is unclear if this phrase is meant to be similar to the phrase in claim 9 of “a permissioned blockchain ledger”. In other words, is the permissioned blockchain ledger different from the permissioned blockchain? If they are meant to be different, then no action is required; if they are meant to be construed similarly, the Examiner recommends conforming to a standard naming scheme for that element.

Claim 11 recites the acronym “NFT” without first defining it.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 10 recites the limitation "the permissioned blockchain" in the body of the claim. There is insufficient antecedent basis for this limitation in the claim. Claim 10 depends upon claim 1, which makes no mention of a blockchain.
The Examiner notes that claim 9 recites “a permissioned blockchain ledger”. For purposes of compact prosecution, the Examiner is construing claim 10 to depend upon claim 9 in order to provide antecedent basis support to claim 10.

With regards to claim 11, this claim depends upon claim 10 and inherits the same deficiencies as claim 10 as discussed above. Therefore, claim 11 is rejected for similar reasons as discussed above.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

With regard to claim 1:

Step 2A, Prong One: The claim recites the following limitations, which are drawn towards an abstract idea:

A system for an automated evaluation of a game animal based on sensory animal-related data, comprising: derive the animal profile sensory data from the evaluation request; parse the animal profile sensory data to derive a plurality of key classifying features (recites mental process steps of evaluating information, including using sensory data such as visual information to detect key features for further analysis, for example, seeing an animal and evaluating what features of the animal to utilize for further analysis); generate at least one classifier feature vector based on the plurality of key classifying features and the local historical animal evaluations'-related data (recites mental process steps of evaluating and determining (via physical measurements) or educated guessing via visual observations various values for the features of the animal being observed); and generate animal scoring data for the at least one user-entity node based on the at least one animal scoring parameter (recites mental process steps of evaluating and forming a judgement/decision that can involve mathematical calculations to derive a score/value reflective of the feature values of the animal/data being evaluated).

As seen from above, the identified limitations recite concepts associated with an abstract idea; thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as being additional elements, as discussed below:

a processor of an animal evaluation server (AES) node configured to host a machine learning (ML) module and connected to at least one user-entity node over a network; and a memory on which are stored machine-readable instructions that when executed by the processor, cause the processor to (recites generic hardware elements intended to be used for generic hardware activities, such as a processor and memory on a server (server node) and client device (user-entity node) that utilizes software/a program (machine learning module) to perform the judicial exception, which amounts to apply-it-type limitations that use the computer as a tool to perform the judicial exception, see MPEP 2106.05(f)): receive an evaluation request comprising animal profile sensory data from the at least one user-entity node (recites mere data gathering steps similar to insignificant extrasolution activity of receiving information over a network, see MPEP 2106.05(g)); query a local animal evaluation database to retrieve local historical animal evaluations'-related data based on the plurality of key classifying features (recites insignificant extrasolution activity of transmitting and receiving information, see MPEP 2106.05(g)); and provide the at least one classifier feature vector to the ML module configured to generate an animal evaluation predictive model for producing at least one animal scoring parameter (recites apply-it-type limitations of using a computer as a tool to implement the abstract idea, see MPEP 2106.05(f)).

As seen from the above discussion, the identified limitations do not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). This judicial exception is not integrated into a practical application because the additional elements relate to retrieval of data and utilize generic computer components, thus using the computer as a tool to implement the abstract idea.

Step 2B: Below is the analysis of the claims:

a processor of an animal evaluation server (AES) node configured to host a machine learning (ML) module and connected to at least one user-entity node over a network; and a memory on which are stored machine-readable instructions that when executed by the processor, cause the processor to (recites generic hardware elements intended to be used for generic hardware activities, such as a processor and memory on a server (server node) and client device (user-entity node) that utilizes software/a program (machine learning module) to perform the judicial exception, which amounts to apply-it-type limitations that use the computer as a tool to perform the judicial exception, see MPEP 2106.05(f)): receive an evaluation request comprising animal profile sensory data from the at least one user-entity node (recites mere data gathering steps, i.e. well-understood, routine, and conventional activity of receiving information over a network, see MPEP 2106.05(d)); query a local animal evaluation database to retrieve local historical animal evaluations'-related data based on the plurality of key classifying features (recites well-understood, routine, and conventional activity of transmitting and receiving information, see MPEP 2106.05(d)); and provide the at least one classifier feature vector to the ML module configured to generate an animal evaluation predictive model for producing at least one animal scoring parameter (recites apply-it-type limitations of using a computer as a tool to implement the abstract idea, see MPEP 2106.05(f)).

As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements relate to retrieval of data and utilize generic computer components, thus using the computer as a tool to implement the abstract idea.

With regard to claim 2, this claim recites wherein the animal profile sensory data comprising any of: (a) live video data; (b) imaging data; (c) IR emission data; and a combination of (a), (b) and (c) (recites technological environment limitations describing the data type formats, see MPEP 2106.05(h)).

With regard to claim 3, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to generate the animal scoring data based on a number and size of horns or antlers (recites field of use limitations describing particular types of features that are utilized in the evaluation, see MPEP 2106.05(h)).
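For readers less familiar with the claim language, the claim-1 data flow the Examiner walks through above (parse sensory data into key classifying features, build a classifier feature vector against local historical data, then score via the ML module) can be sketched as follows. Every name, feature, and the scoring rule here is a hypothetical illustration, not the application's actual implementation:

```python
# Hypothetical sketch of the claim-1 pipeline; all names, features, and the
# scoring rule are illustrative assumptions, not the application's method.
def parse_key_features(sensory_data: dict) -> dict:
    # "parse the animal profile sensory data to derive ... key classifying features"
    return {k: sensory_data[k] for k in ("antler_points", "antler_spread_cm")}

def build_feature_vector(features: dict, local_history: list) -> list:
    # "generate at least one classifier feature vector" from the key features
    # combined with the local historical animal evaluations'-related data
    avg_spread = sum(h["antler_spread_cm"] for h in local_history) / len(local_history)
    return [features["antler_points"],
            features["antler_spread_cm"],
            features["antler_spread_cm"] / avg_spread]

def score(vector: list) -> float:
    # stand-in for the ML module producing an "animal scoring parameter"
    points, spread, relative_spread = vector
    return points * 5 + spread * 0.5 + relative_spread * 10

request = {"antler_points": 10, "antler_spread_cm": 50.0}           # from user-entity node
history = [{"antler_spread_cm": 45.0}, {"antler_spread_cm": 55.0}]  # local database
vector = build_feature_vector(parse_key_features(request), history)
print(f"animal scoring data: {score(vector):.1f}")  # 85.0 with these toy inputs
```

The sketch also makes the Examiner's characterization concrete: each step is a comparison or weighted sum a person could perform mentally or on paper, with the server, database, and ML module supplying the generic computer environment.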
With regard to claim 4, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to retrieve pre-stored data (recites insignificant extrasolution activity of retrieving information which amounts to well-understood, routine, and conventional activity of retrieving information, see MPEP 2106.05(d)) comprising the number and the size of the horns or the antlers for this type of the animal (recites field of use limitations describing particular types of features that are utilized in the evaluation, see MPEP 2106.05(h)).

With regard to claim 5, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to retrieve remote historical animal evaluations'-related data from at least one remote database based on the plurality of key classifying features and the animal profile sensory data, wherein the remote historical animal evaluations'-related data is collected at other remote hunting sites (recites insignificant extrasolution activity of retrieving information which amounts to well-understood, routine, and conventional activity of retrieving information, see MPEP 2106.05(d)).

With regard to claim 6, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to generate the at least one classifier feature vector based on the plurality of key classifying features and the local historical animal evaluations'-related data combined with the remote historical animal evaluations'-related data (recites mental process steps of combining various data pieces together, such as remembering various measurement data that was observed/evaluated as well as known local area values/observations from recent past observations and observations from outside the immediate area, e.g. knowing measurement values of animals from the surrounding area but not the immediate local area).
With regard to claim 7, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to continuously monitor the animal profile sensory data to determine if at least one value of animal-related parameters deviates from a previous value of an animal-related parameter value by a margin exceeding a pre-set threshold value (recites mental process steps of periodically evaluating current observations to determine if the calculated values have exceeded some measurable difference).

With regard to claim 8, this claim recites wherein the machine-readable instructions that when executed by the processor, cause the processor to, responsive to the at least one value of the animal-related parameters deviating from the previous value of the animal-related parameter by the margin exceeding the pre-set threshold value, generate an updated classifier feature vector and generate the animal scoring data based on at least one animal scoring parameter produced by the animal evaluation predictive model in response to the updated classifier feature vector (recites mental process steps of updating the respective guessed/measured values used to evaluate/score the animal and then performing the respective mental process steps to calculate/score the animal based on the newly observed data).
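The claims 7-8 monitoring step the Examiner characterizes above reduces to a threshold comparison followed by conditional re-scoring. A minimal sketch, with all values and names hypothetical:

```python
# Claims 7-8 in miniature: flag when an animal-related parameter deviates from
# its previous value by more than a pre-set threshold, and only then regenerate
# the classifier feature vector and re-score. All values are hypothetical.
def deviates(current: float, previous: float, threshold: float) -> bool:
    return abs(current - previous) > threshold

previous_spread_cm, threshold_cm = 50.0, 3.0
for current in (51.0, 48.5, 56.0):
    if deviates(current, previous_spread_cm, threshold_cm):
        print(f"{current} cm: regenerate feature vector and update scoring data")
    else:
        print(f"{current} cm: within threshold, keep existing score")
```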
With regard to claim 9, this claim recites wherein the machine-readable instructions that when executed by the processor, further cause the processor to record the animal scoring data and at least one corresponding animal scoring parameter along with the animal profile sensory data on a permissioned blockchain ledger (recites insignificant extrasolution activity of storing information in memory which amounts to well-understood, routine, and conventional activity of storing information in memory/transmitting information over a network, see MPEP 2106.05(d), where the storage is based on a particular data structure protocol such as blockchain, which amounts to technological environment limitations describing the underlying data structure and its respective access protocols, see MPEP 2106.05(h)).

With regard to claim 10, this claim recites wherein the machine-readable instructions that when executed by the processor, further cause the processor to retrieve the at least one animal scoring parameter from the permissioned blockchain responsive to a request from at least one user-entity node onboarded onto the permissioned blockchain (recites insignificant extrasolution activity of accessing information from memory/retrieving information over the network which amounts to well-understood, routine, and conventional activity of accessing information from memory/retrieving information over the network, where the particular data structure being used is a blockchain, which amounts to technological environment limitations describing the underlying data structure, see MPEP 2106.05(h)).
With regard to claim 11, this claim recites wherein the machine-readable instructions that when executed by the processor, further cause the processor to execute a smart contract to generate at least one NFT including the animal scoring data corresponding to the animal profile sensory data on the permissioned blockchain (recites field of use limitations describing, at a high level of generality, the usage of particular types of data/digital asset in the system, see MPEP 2106.05(h)).

With regard to claims 12-19, these claims are substantially similar to claims 1 and 3-9 respectively and are rejected for similar reasons as discussed above.

With regard to claim 20, this claim is substantially similar to claim 1 and is rejected for similar reasons as discussed above.

Claim 20 is also rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed towards signals per se. Claim 20 recites a non-transitory computer-readable medium; the specification at paragraph [0059] provides examples of what entails a non-transitory computer-readable medium, an open-ended definition which does not appear to exclude signals and carrier waves. As such, under broadest reasonable interpretation, the claim term is directed towards signals and is rejected accordingly as being directed towards signals per se.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 12-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mcfarland [US 2024/0029290 A1] in view of Demarais et al [US 2011/0311109 A1].

With regard to claim 1, Mcfarland teaches a system for an automated evaluation of a game animal based on sensory animal-related data, comprising: a processor of an animal evaluation server (AES) node configured to host a machine learning (ML) module and connected to at least one user-entity node over a network; and a memory on which are stored machine-readable instructions that when executed by the processor, cause the processor to (see paragraphs [0028] and [0029]; the system can utilize various computer components including a user device such as a smart device (102) that communicates with a server (116) in the cloud, where the various computer devices have storage and use computer processors to process data, see Figure 8 for an example of the server using a GPU; “The server 116 on the cloud platform 104 may comprise artificial intelligence (AI) technology that analyzes each submitted photo, whether it is captured via the camera 114 or uploaded from the smart device storage 115. The AI can analyze each section of the antler to produce a score card as described below” [0029]): receive an evaluation request comprising animal profile sensory data from the at least one user-entity node (see paragraphs [0036] and [0030]-[0031]; the user can interact with their smart device via an application and be able to select a score icon for respective sensory data/image data that was transmitted to the cloud system; “In particular, the score icon may produce a second user interface that allows the user to select the species of the uploaded or captured image. At this point, the smart device 102 communicates with the server 116 the cloud platform 104 and the cloud platform 104, where the image is analyzed.” [0031]); derive the animal profile sensory data from the evaluation request (see paragraphs [0031], [0033], [0036]; the system is able to receive the respective request from the user’s device and be able to derive/identify the respective image(s) to be analyzed); query a local animal evaluation database (see paragraph [0036]; the system can compare the image with various stored evaluations-related data: “At step 210, AI on the cloud platform analyzes the image and compares it to stored data. That is, data that includes numerous antler measurements.” [0036]); and provide the at least one classifier feature and generate animal scoring data for the at least one user-entity node based on the at least one animal scoring parameter (see paragraphs [0032]-[0033] and [0036]; the system can generate scoring data for the respective user based on the identified species/scoring parameter).
Mcfarland teaches means for a machine learning module to analyze image information but does not appear to explicitly teach: parse the animal profile sensory data to derive a plurality of key classifying features; query a local animal evaluation database to retrieve local historical animal evaluations'-related data based on the plurality of key classifying features; generate at least one classifier feature vector based on the plurality of key classifying features and the local historical animal evaluations'-related data; and provide the at least one classifier feature vector to the ML module…

Demarais teaches query a local animal evaluation database to retrieve local historical animal evaluations'-related data based on the plurality of key classifying features (see paragraphs [0038] and [0021]; the system can query/retrieve local evaluation data for the respective identified species).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the previously stored data in the database of Mcfarland by utilizing data for respective species/sub-species for their respective local regions as taught by Demarais in order to utilize scientific knowledge that various anatomical features of animals/mammals vary significantly amongst related species/sub-species across regions; utilizing the local data for the region associated with the evaluation request would help ensure that the system utilizes the most relevant data when analyzing the input data, thereby ensuring that the respective calculations are more accurate, since the system takes into account where the respective data is from and uses the variations of the respective local region (see Demarais, paragraph [0008]).

Mcfarland in view of Demarais teach parse the animal profile sensory data to derive a plurality of key classifying features (see Mcfarland, paragraph [0037]; the system can analyze the photos and extract information, i.e. feature extraction, for processing including species detection and scoring the respective animal); generate at least one classifier feature vector based on the plurality of key classifying features and the local historical animal evaluations'-related data; and provide the at least one classifier feature vector to the ML module… (see Demarais, paragraphs [0038] and [0043]; Mcfarland, paragraph [0033] and Figure 4; the system can utilize the classifying features of the object/animal and provide it to a machine-learning module to allow it to classify the respective species of the animal as well as identify information about respective features for scoring; “More particularly, in one embodiment, the server 116 may comprise numerous AI systems (e.g., ten) where three AI systems are used to classify (determine the species) and crop the antler from the image to be scored.” Mcfarland, paragraph [0033]).

With regard to claim 2, Mcfarland in view of Demarais teach wherein the animal profile sensory data comprising any of: (a) live video data; (b) imaging data; (c) IR emission data; and a combination of (a), (b) and (c) (see Mcfarland, paragraph [0031]; the system can utilize a camera on a device to capture imaging data to be used by the system).

With regard to claim 3, Mcfarland in view of Demarais teach wherein the machine-readable instructions that when executed by the processor, cause the processor to generate the animal scoring data based on a number and size of horns or antlers (see Demarais, paragraph [0051]; Mcfarland, paragraphs [0005], [0026], [0033]; the scoring system can utilize well-known and established scoring schemes that consider size and number/points/tines of antlers).
With regard to claim 4, Mcfarland in view of Demarais teach wherein the machine-readable instructions that when executed by the processor, cause the processor to retrieve pre-stored data comprising the number and the size of the horns or the antlers for this type of the animal (see Demarais, paragraphs [0038], [0043], [0051]; Mcfarland, paragraphs [0005], [0026], [0033]; the scoring system can utilize well-known and established scoring schemes that consider size and number/points/tines of antlers, where the pre-stored data can include regional values for that species).

With regard to claims 12-14, these claims are substantially similar to claims 1 and 3-4 respectively and are rejected for similar reasons as discussed above.

With regard to claim 20, this claim is substantially similar to claim 1 and is rejected for similar reasons as discussed above.

Claims 5, 6, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mcfarland [US 2024/0029290 A1] in view of Demarais et al [US 2011/0311109 A1] in further view of Monk et al [US 11,373,427].

With regard to claim 5, Mcfarland in view of Demarais teach all the claim limitations of claim 1 as discussed above. Mcfarland in view of Demarais do not appear to explicitly teach wherein the machine-readable instructions that when executed by the processor, cause the processor to retrieve remote historical animal evaluations'-related data from at least one remote database based on the plurality of key classifying features and the animal profile sensory data, wherein the remote historical animal evaluations'-related data is collected at other remote hunting sites.
Monk teaches wherein the machine-readable instructions that when executed by the processor, cause the processor to retrieve remote historical animal evaluations'-related data from at least one remote database based on the plurality of key classifying features and the animal profile sensory data (see col 20, lines 17-51; col 25, line 64 through col 26, line 17; and col 28, line 58 through col 29, line 13; the system can retrieve and collect various data from the remote hunting sites as means to help forecast or predict animal sightings).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the regional data collection scheme of Mcfarland in view of Demarais by collecting and aggregating data from a plurality of remote hunting sites as taught by Monk in order to utilize various remote monitoring sites that capture information about wild animals as means to determine the specific regional values, and to be able to collect/aggregate the data from those remote sites so that the system has the most up-to-date information about regional information/scoring/sizes of the respective species, thus helping the system to be more accurate for its users by basing the system’s calculations on up-to-date information rather than older information which might be stale and unreliable.

Mcfarland in view of Demarais in further view of Monk teach wherein the remote historical animal evaluations'-related data is collected at other remote hunting sites (see Monk, col 20, lines 17-51; col 25, line 64 through col 26, line 17; and col 28, line 58 through col 30, line 32; see Demarais, paragraphs [0038] and [0021]; the system can monitor identified animals at remote hunting sites and utilize that information as the respective regional information that can be collected and used by the system).
With regard to claim 6, Mcfarland in view of Demarais in further view of Monk teach wherein the machine-readable instructions that when executed by the processor, cause the processor to generate the at least one classifier feature vector based on the plurality of key classifying features and the local historical animal evaluations'-related data combined with the remote historical animal evaluations'-related data (see Monk, col 20, lines 17-51; col 25, line 64 through col 26, line 17; and col 28, line 58 through col 30, line 32; see Demarais, paragraphs [0038], [0021], and [0043]; Mcfarland, paragraph [0033]; the system can utilize the classifying features of the object/animal and provide it to a machine-learning module to allow it to classify the respective species of the animal as well as identify information about respective features for scoring).

With regard to claims 15 and 16, these claims are substantially similar to claims 5 and 6 respectively and are rejected for similar reasons as discussed above.

Claims 7, 8, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mcfarland [US 2024/0029290 A1] in view of Demarais et al [US 2011/0311109 A1] in further view of Vallez Enano et al [US 2024/0171845 A1].

With regard to claim 7, Mcfarland in view of Demarais teach all the claim limitations of claim 1 as discussed above. Mcfarland in view of Demarais do not appear to explicitly teach wherein the machine-readable instructions that when executed by the processor, cause the processor to continuously monitor the animal profile sensory data to determine if at least one value of animal-related parameters deviates from a previous value of an animal-related parameter value by a margin exceeding a pre-set threshold value.
Vallez Enano teaches wherein the machine-readable instructions that when executed by the processor, cause the processor to continuously monitor the animal profile sensory data to determine if at least one value of animal-related parameters deviates from a previous value of an animal-related parameter value by a margin exceeding a pre-set threshold value (see paragraphs [0044], [0068] and [0105]-[0108]; the system is able to monitor the object and be able to determine if the respective current parameter values deviate from the previous parameter values by some threshold).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the image monitoring process of Mcfarland in view of Demarais by continuing to monitor information tracking or viewing the respective object as taught by Vallez Enano in order to update or correct the classification/identification of detected objects, helping to improve the reliability and accuracy of the system for video identification of objects/animals by ensuring that the system doesn’t overcompensate or undercompensate various feature values based on the orientation and other visual features of the input image/frame(s), by allowing for evaluation of the object/animal from multiple different frames/images and detecting when there is a threshold deviation between first measured/estimated values and the current measured/estimated values.
With regard to claim 8, Mcfarland in view of Demarais in further view of Vallez Enano teach wherein the machine-readable instructions that when executed by the processor, cause the processor to, responsive to the at least one value of the animal-related parameters deviating from the previous value of the animal-related parameter by the margin exceeding the pre-set threshold value, generate an updated classifier feature vector and generate the animal scoring data based on at least one animal scoring parameter produced by the animal evaluation predictive model in response to the updated classifier feature vector (see Vallez Enano, paragraphs [0044], [0068] and [0105]-[0108]; see Mcfarland, paragraphs [0032]-[0033] and [0036]; the system is able to monitor the object and be able to determine if the respective current parameter values deviate from the previous parameter values by some threshold, where the updated classification feature vector can be utilized to create/update the respective score).

With regard to claims 17 and 18, these claims are substantially similar to claims 7 and 8 respectively and are rejected for similar reasons as discussed above.

Claims 9-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mcfarland [US 2024/0029290 A1] in view of Demarais et al [US 2011/0311109 A1] in further view of Hufnagl-Abraham [US 2024/0062628 A1].

With regard to claim 9, Mcfarland in view of Demarais teach all the claim limitations of claim 1 as discussed above.
Mcfarland in view of Demarais teach wherein the machine-readable instructions, when executed by the processor, further cause the processor to record the animal scoring data and at least one corresponding animal scoring parameter along with the animal profile sensory data. Mcfarland in view of Demarais do not appear to explicitly teach recording the animal scoring data, the at least one corresponding animal scoring parameter, and the animal profile sensory data on a permissioned blockchain ledger.

Hufnagl-Abraham teaches a permissioned blockchain ledger (see paragraph [0060]; the system can store records in a blockchain that is permissioned). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the cloud storage scheme of Mcfarland in view of Demarais by using a permissioned blockchain as taught by Hufnagl-Abraham in order to provide a non-public storage scheme that allows members, e.g., users with the same mobile application (see Mcfarland, paragraph [0037]), to interact with the stored data while providing means for data protection and integrity through digital verification of the data, including who actually owns the record or logged information, and by ensuring the data cannot be changed (i.e., data security).
With regard to claim 10, Mcfarland in view of Demarais in further view of Hufnagl-Abraham teach wherein the machine-readable instructions, when executed by the processor, further cause the processor to retrieve the at least one animal scoring parameter from the permissioned blockchain responsive to a request from at least one user-entity node onboarded onto the permissioned blockchain (see Hufnagl-Abraham, paragraph [0060]; see Mcfarland, paragraphs [0032]-[0033] and [0036]; the system can store records in a blockchain that is permissioned and has means to retrieve previously uploaded data; see the 35 U.S.C. 112 rejections, claim 10 being construed as dependent upon claim 9 in order to provide antecedent basis support).

With regard to claim 11, Mcfarland in view of Demarais in further view of Hufnagl-Abraham teach wherein the machine-readable instructions, when executed by the processor, further cause the processor to execute a smart contract to generate at least one NFT including the animal scoring data corresponding to the animal profile sensory data on the permissioned blockchain (see Mcfarland, paragraph [0033]; Hufnagl-Abraham, paragraphs [0014], [0015], [0056], [0059]-[0061] and [0064]; the system can utilize a smart contract as a means to generate NFTs associated with hunting events/scores and store the respective NFT in the blockchain).

With regard to claim 19, this claim is substantially similar to claim 9 and is rejected for similar reasons as discussed above.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Gleim et al. [US 2014/0122352 A1] teaches at paragraphs [0028]-[0029] and [0032]-[0035] using artificial intelligence to measure, count, and score an animal from a photo, as well as means to recognize and distinguish between different animals.

Harty et al. [US 2022/0284725 A1] teaches at paragraph [0023] how the system uses a feature vector of each animal that is computed/generated based on images of the animal.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS, whose telephone number is (571) 270-3567. The examiner can normally be reached M-F 11-8 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARC S SOMERS/Primary Examiner, Art Unit 2159 11/13/2025

Prosecution Timeline

Jan 15, 2025
Application Filed
Nov 13, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579099
CONTROL LEVEL TAGGING METHOD AND SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12561288
METHOD AND APPARATUS TO VERIFY FILE METADATA IN A DEDUPLICATION FILESYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12554681
SYSTEM AND METHOD OF UNDOING DATA BASED ON DATA FLOW MANAGEMENT
2y 5m to grant Granted Feb 17, 2026
Patent 12541502
METHODS AND APPARATUSES FOR IMPROVING PROCESSING EFFICIENCY IN A DISTRIBUTED SYSTEM
2y 5m to grant Granted Feb 03, 2026
Patent 12530365
SYSTEMS AND METHODS FOR A MACHINE LEARNING FRAMEWORK
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
65%
Grant Probability
99%
With Interview (+34.6%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
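The projection figures above can be reproduced with simple arithmetic. This is a minimal sketch, assuming the base rate is the career allow rate (364 granted / 563 resolved) and the interview lift (+34.6 percentage points) is additive and capped at 99%; the actual model behind the dashboard is not disclosed, and the function name is hypothetical.

```python
def grant_probability_with_interview(base_rate: float, lift: float, cap: float = 99.0) -> float:
    """Combine a base grant probability with an interview lift, both in
    percentage points, capping the result (assumed cap: 99%)."""
    return min(base_rate + lift, cap)

# Career allow rate: 364 granted out of 563 resolved cases.
base = 364 / 563 * 100   # ≈ 64.65, displayed as 65%
print(round(base))                                    # 65
print(grant_probability_with_interview(65.0, 34.6))   # 99.0 (capped from 99.6)
```

Under these assumptions, the displayed 99% "With Interview" figure is simply the 65% career rate plus the 34.6-point lift, capped.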
