Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/30/2025 has been entered.
Status of Claims
This action is in reply to the amendments and remarks filed on 12/30/2025.
Claims 1-4 and 6-9 are pending.
Claims 1-3 and 6-7 have been amended.
Claim 5 has been canceled.
Response to Arguments
Applicant’s arguments, with respect to the objections to claim 7, have been considered and are persuasive. The previous objections have been withdrawn.
Applicant’s arguments, with respect to the rejections of claim 1 under 35 U.S.C. 112(b) for lack of antecedent basis, have been considered and are persuasive. The previous antecedent basis rejections have been withdrawn. However, the previous 112(b) rejections regarding the 112(f) interpretations were not addressed and are therefore maintained.
Applicant’s arguments, with respect to the rejection(s) of claim(s) 1-4 and 6-9 under 35 U.S.C. 101, have been considered but they are not persuasive. The applicant argues that the amended claims recite “using such pet insurance claim data as training data makes the relationship between the facial image and the disease of the animal depicted in that image more accurate…and additionally incorporates the configuration of a processor for calculating insurance premiums…thereby reciting a practical application of, and significantly more than, the purported abstract idea”, and therefore overcome the 101 rejections. The examiner respectfully disagrees.
The amendments do not overcome the previous 101 abstract idea rejection. The training operations and computer components are recited at a high level of generality and amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; and the predicting, deep learning model, and premium calculation configuration are recited at a high level of generality and generally link the use of the judicial exception to a particular technological environment or field of use. See the 35 U.S.C. 101 section below for the full, updated analysis of the claim limitations necessitated by applicant’s amendments.
Applicant’s arguments, with respect to the rejection(s) of claims 1, 2, and 6 under 35 U.S.C. 103, have been considered but are not persuasive. The applicant argues that no reference teaches the amended claim limitations of claims 1 and 6 that state “the deep learning by using facial still images of animals excluding humans and a presence or absence of a contracted disease within a predetermined period from a time of imaging the animals which is obtained from pet insurance claim records for that animal, as training data”, since the references do not teach “the use of pet insurance claim data and the processor for calculating insurance premiums”. The examiner respectfully disagrees.
Due to applicant’s amendments, Gibbs has been found to teach specific claim elements in combination with Hayward. Regardless, Gibbs, at paragraphs 0055, 0065, 0070, 0075, 0092-0093, and Figs. 7 and 15, teaches “ML engine identifies co-occurring behavioral anomalies…[and] match[es] the pattern to the symptomology expert system resulting in a predictive diagnosis of epilepsy”, and automatically contacts a vet. The system trains the ML engine with the “addition of a treatment to the pet's record [from the vet], and specifically a treatment targeting the co-occurring anomalous behavior generates a new rule to provide comparative analysis of the symptomology at future occurrences of the co-occurring anomalies to the first recorded pattern of the co-occurring anomalies (predicts occurrence of future disease)”. Further, the model is trained based on “real-time” or historic collected data from a certain time including the video data of a dog’s head and face (pet…records); and Fig. 13 further depicts data including a still facial image from the front.
Further, Hayward, at paragraphs 0021-0022, 0059, 0102, 0104, 0109, and 0112, teaches using “deep learning” for predicting “risk variables, [in] an initial risk assessment may be made, which may include a scaled risk score or other suitable indicator to quantify the risk of insuring the user given the likelihood, for example (in the case of a life or health insurance policy) of the various medical-related conditions occurring within some future time horizon that coincides with the insurance coverage”; and “information collected and/or analyzed may pertain to domesticated animals (e.g., dogs, cats, thoroughbreds, etc.) and/or livestock” and be in the form of “insurance records”.
See the 35 U.S.C. 103 section below for the full mapping of claim limitations necessitated by applicant’s amendments.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an interface unit that receives” in claims 1-2
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Further, the limitations of “an interface unit that receives” for the above corresponding operations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; however, applicant’s specification at page 6, lines 1-12 recites sufficient structure, stating: “The assessment means of the present invention includes a learned model…Artificial Intelligence (AI) is preferable as the learned model. Artificial Intelligence (AI) refers to software or a system that uses a computer to mimic intellectual work performed by human brains, and specifically refers to a computer program or the like”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-4 and 6-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1-2’s limitations of “an interface unit that receives” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Applicant’s specification pages 10-11 state “interface unit (communication unit) 30 includes a reception means 31 and an output means 32”; wherein page 5, lines 10-23 recite “reception means…is a means for receiving an input of a facial image…An image receiving method may be any method such as scanning, input of image data, and transmission”; page 8, lines 13-15 state “output by displaying, for example on the screen of a personal computer”; and Fig. 3 depicts the “interface unit” of a server. However, it is unclear to the examiner whether the said interface unit is a set of instructions contained within a memory or a separate and distinct hardware component that executes instructions to accomplish the claimed operations, and further whether the “screen” belongs to the interface unit or to the “terminal,” which is “a personal computer” per page 9, line 10. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claims so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4 and 6-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 2 are drawn to a system and claim 6 to a method; hence each falls under one of the four categories of statutory subject matter (Step 1). Nonetheless, the claims are directed to a judicially recognized exception, an abstract idea, without significantly more.
Claims 1, 2, and 6 recite the following, or analogous, limitations: “an interface unit that receives input of a facial still image of an animal excluding a human [preparing a facial still image of an animal excluding a human claim 6]; and the…model for predicting and determining [and inputting the facial image to a deep learning model and outputting a prediction by…regarding claim 6] whether the animal may contract a disease in future from the facial still image of the animal input to the interface unit;…inputs a facial still image of an animal, and outputs a predictive assessment regarding whether the animal may contract a disease in future predetermined period”. These limitations, as claimed, under their broadest reasonable interpretation, can be practically performed in the human mind, except for the recitation of generic computer components. Other than reciting “a computer”, “to a computer including artificial intelligence as training data and making the artificial intelligence learn the data”, “a processor that predicts occurrence of future disease of the animal using a deep learning model”, “deep learning model”, “characterized in that the deep learning model performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data; the deep learning model…”, and “and further comprising a premium calculation processor that calculates the animal's insurance premium using the predicted determination result output by the above processor using the deep learning model, and is capable of outputting the insurance premium for the animal, excluding humans applying for pet insurance”, to perform the exceptions, nothing in the claims precludes the steps from practically being performed in the human mind. For example, a human expert can:
mentally/with the aid of pen and paper an interface unit that receives input of a facial still image of an animal excluding a human [preparing a facial still image of an animal excluding a human claim 6] (e.g., by thinking of or writing out a remembered photo of a dog’s face),
mentally/with the aid of pen and paper and the…model for predicting and determining [and inputting the facial image to a…model and outputting a prediction by…regarding claim 6] whether the animal may contract a disease in future from the facial still image of the animal input to the interface unit (e.g. by thinking of/writing out a first calculation with parameters to segment the remembered photo of the dog’s face and calculate a probability that type of dog will contract an illness within a certain number of years),
mentally/with the aid of pen and paper …inputs a facial still image of an animal, and outputs a predictive assessment regarding whether the animal may contract a disease in future predetermined period (e.g. by thinking of/writing out a first calculation with parameters to segment the remembered photo of the dog’s face and calculate a probability that type of dog will contract an illness within a certain number of years).
Thus, the claims recite a mental process (Step 2A, Prong 1).
Claims 1, 2, and 6 include the additional elements “a computer”, “to a computer including artificial intelligence as training data and making the artificial intelligence learn the data”, “a processor that predicts occurrence of future disease of the animal using a deep learning model”, “deep learning model”, “characterized in that the deep learning model performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data; the deep learning model…”, and “and further comprising a premium calculation processor that calculates the animal's insurance premium using the predicted determination result output by the above processor using the deep learning model, and is capable of outputting the insurance premium for the animal, excluding humans applying for pet insurance”; however, these elements are recited at a high level of generality and amount to adding the words “apply it” (or an equivalent) with the judicial exception (i.e., “to a computer including artificial intelligence as training data and making the artificial intelligence learn the data”, and “characterized in that the deep learning model performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data; the deep learning model…”), or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (i.e., “a computer” and “a processor”, and “a premium calculation processor”) (see MPEP 2106.05(f)); generally linking the use of the judicial exception to a particular technological environment or field of use (i.e., “predicts occurrence of future disease of the animal using a deep learning model” and “deep learning model”, “calculates the animal's insurance premium using the predicted determination result output by…using the deep learning model”) (see MPEP 2106.05(h)); and mere data gathering, storing, or outputting, which are forms of adding insignificant extra-solution activity to the judicial exception (i.e., “capable of outputting the insurance premium for the animal, excluding humans applying for pet insurance”) (see MPEP 2106.05(g)). Hence, the additional limitations, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (Step 2A, Prong 2). The additional elements in the claims do not amount to significantly more than an abstract idea. Furthermore, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of using “a computer”, “to a computer including artificial intelligence as training data and making the artificial intelligence learn the data”, “a processor that predicts occurrence of future disease of the animal using a deep learning model”, “deep learning model”, “characterized in that the deep learning model performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data; the deep learning model…”, and “and further comprising a premium calculation processor that calculates the animal's insurance premium using the predicted determination result output by the above processor using the deep learning model, and is capable of outputting the insurance premium for the animal, excluding humans applying for pet insurance”, to perform the steps of the independent claims amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; generally linking the use of the judicial exception to a particular technological environment or field of use; and mere data gathering, storing, or outputting, which are forms of adding insignificant extra-solution activity to the judicial exception; as these cannot provide an inventive concept (Step 2B). As such, claims 1, 2, and 6 are not patent eligible.
Dependent claims 3-4 and 7-9 are also ineligible for the same reasons given with respect to claims 1, 2, and 6. The dependent claims describe additional mental processes:
mentally/with the aid of pen and paper wherein the input still image is an image obtained by imaging the face of the animal from the front (claims 3 and 7) (e.g., by recalling or writing out that the remembered photo shows a dog’s face looking toward the camera)
mentally/with the aid of pen and paper inputting a facial image of an animal to be in coverage of insurance…and…determining an insurance fee of the animal in accordance with the output prediction of contraction of a disease (claims 4 and 8-9) (e.g., by mentally performing or writing out a second calculation that outputs a health insurance premium for the dog based on the first calculation’s probability)
Again, the dependent claims continue to cover the performance of the limitations in the mind as inherited from the independent claims (Step 2A, Prong 1). Dependent claims 4 and 8-9’s recitations of “an insurance fee calculation system” and “to the disease prediction system according to claim 1 [2 or 3]” are recited at a high level of generality and amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (Step 2A, Prong 2). The additional elements in the claims do not amount to significantly more than an abstract idea. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the steps of the dependent claims amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; and these cannot provide an inventive concept (Step 2B). As such, dependent claims 3-4 and 7-9’s additional elements, alone or in combination, do not amount to significantly more than an abstract idea, do not provide an inventive concept, and do not impose a meaningful limit that integrates the abstract idea into a practical application; therefore, the dependent claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4 and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima, in view of Gibbs et al (US Pub 20200381119) hereinafter Gibbs, and further in view of Hayward et al (US Pub 20210256615) hereinafter Hayward.
Regarding claim 1, Kojima teaches a disease prediction system comprising: an interface unit that receives input of a facial still image of an animal excluding a human (paragraphs 0054-0055 and 0083-0089 teach a device’s “video camera” (interface unit) capturing a video and extracting a “still image” of an animal showing its nose and eyes);
a processor that predicts occurrence of disease of the animal using a deep learning model (paragraph 0181 teaches a processor for performing the embodiments of the disclosure, including paragraphs 0054-0055 and 0083-0089 teaching identifying a “degree of similarity using known image pattern recognition technologies such as, e.g., a Bayesian recognition method, subspace method, and neural network” between the extracted still image and stored image patterns of diseases);
and the deep learning model for predicting and determining whether the animal may contract a disease from the facial still image of the animal input to the interface unit (paragraphs 0054-0055 and 0083-0089 teach identifying a “degree of similarity using known image pattern recognition technologies such as, e.g., a Bayesian recognition method, subspace method, and neural network” between the extracted still image (from the facial still image of the animal input to the interface unit) and stored image patterns of diseases); characterized in that
the deep learning model inputs a facial still image of an animal, and outputs a predictive assessment regarding whether the animal may contract a disease within a predetermined period (paragraphs 0054-0055, 0083-0089, 0165, and Fig. 12 teach a neural network matching data for estimating the submitted animal disease for a specific time (predetermined period) of the still image extracted from the video).
While Kojima teaches utilizing a neural network for image pattern recognition, which is well known to require training on the type of data it is predicting, Kojima does not explicitly teach a processor that predicts occurrence of future disease of the animal using a deep learning model; and the deep learning model for predicting and determining whether the animal may contract a disease in future from the facial still image of the animal input to the interface unit…the deep learning by using facial still images of animals excluding humans and a presence or absence of a contracted disease within a predetermined period from a time of imaging the animals which is obtained from pet insurance claim records for that animal, as training data.
Gibbs teaches a processor that predicts occurrence of future disease of the animal using a deep learning model; and the deep learning model for predicting and determining whether the animal may contract a disease in future…the deep learning by using facial still images of animals excluding humans and a presence or absence of a contracted disease within a predetermined period from a time of imaging the animals which is obtained from pet records for that animal, as training data (paragraphs 0055, 0065, 0070, 0075, 0092-0093, and Figs. 7 and 15 teach “ML engine identifies co-occurring behavioral anomalies…[and] match[es] the pattern to the symptomology expert system resulting in a predictive diagnosis of epilepsy”, and automatically contacts a vet. The system trains the ML engine with the “addition of a treatment to the pet's record [from the vet], and specifically a treatment targeting the co-occurring anomalous behavior generates a new rule to provide comparative analysis of the symptomology at future occurrences of the co-occurring anomalies to the first recorded pattern of the co-occurring anomalies (predicts occurrence of future disease)”. Further, the model is trained based on “real-time” or historic collected data from a certain time including the video data of a dog’s head and face (pet…records); and Fig. 13 further depicts data including a still facial image from the front).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Gibbs’ teaching of dog disease predicting from input data including video images in Kojima’s teaching of animal still image matching via a neural network for disease prediction in order to “more accurately prescribe treatment protocols based on the pet's historical, clinical and predictive diagnostic information” (Gibbs, paragraph 0010).
Further, the combination at least implies a processor that predicts occurrence of future disease of the animal using a deep learning model; and the deep learning model for predicting and determining whether the animal may contract a disease in future from the facial still image of the animal input to the interface unit, and which is obtained from pet insurance claim records for that animal, as training data; however, Hayward teaches a processor that predicts occurrence of future disease of the animal using a deep learning model; and the deep learning model for predicting and determining whether the animal may contract a disease in future from the facial still image of the animal input to the interface unit…[and] which is obtained from pet insurance claim records for that animal, as training data (paragraphs 0021-0022, 0059, 0102, 0104, 0109, and 0112 teach using “deep learning” for predicting “risk variables, [in] an initial risk assessment may be made, which may include a scaled risk score or other suitable indicator to quantify the risk of insuring the user given the likelihood, for example (in the case of a life or health insurance policy) of the various medical-related conditions occurring within some future time horizon that coincides with the insurance coverage”; and “information collected and/or analyzed may pertain to domesticated animals (e.g., dogs, cats, thoroughbreds, etc.) and/or livestock” and be in the form of “insurance records”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Kojima’s teaching of animal still image matching via a neural network for disease prediction, as modified by Gibbs’ teaching of dog disease predicting from input data including video images, to include Hayward’s teaching of deep learning for predicting risk variables of animal health over a future time horizon in order to “automate and improve upon the efficiency and accuracy of existing insurance loss mitigation and prevention, and claims handling processes” (Hayward, paragraph 0041).
Regarding claim 2, the combination of Kojima, Gibbs, and Hayward teach the analogous claim limitations of claim 1 with the same motivations to combine; and further teach a premium calculation processor that calculates the animal's insurance premium using the predicted determination result output by the above processor using the deep learning model (Hayward, paragraph 0056 teaches “identify one or more intervening actions that, when executed by the user within a future time period, reduce the initial level of risk associated with insuring the user to a second level of risk”),
and is capable of outputting the insurance premium for the animal, excluding humans applying for pet insurance (Hayward, paragraph 0056 teaches “the machine-learning analytics engine 120.1 may calculate pricing (e.g., premiums) associated with the initial and the reduced level of risk, and transmit this information and/or the one or more intervening actions to a computing device (e.g., client device 102) for presentation to the user”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Kojima’s teaching of animal still image matching via a neural network for disease prediction, as modified by Gibbs’ teaching of dog disease predicting from input data including video images, to include Hayward’s teaching of deep learning for predicting risk variables of animal health over a future time horizon and premium calculations in order to “automate and improve upon the efficiency and accuracy of existing insurance loss mitigation and prevention, and claims handling processes” (Hayward, paragraphs 0041 and 0056).
Regarding claim 3, the combination of Kojima, Gibbs, and Hayward teach all the claim limitations of claim 1 above; and further teach wherein the input still image is an image obtained by imaging a face of the animal from a front (Kojima, paragraphs 0054-0055, 0083-0089, and Figs. 12-13 teach a device’s “video camera” (interface unit) capturing a video and extracting a “still image” of an animal showing its nose and eyes).
Kojima at least implies wherein the input still image is an image obtained by imaging the face of the animal from the front; however, Gibbs teaches wherein the input still image is an image obtained by imaging a face of the animal from a front (paragraphs 0055, 0065, 0070, 0075, 0092-0093, and Figs. 7 and 15 teach a machine learning engine model is trained based on “real-time” or historic collected data from a certain time including the video data of a dog’s head and face; and Fig. 13 further depicts data including a still facial image from the front).
Kojima, Gibbs, and Hayward are combinable for the same rationale as set forth above with respect to claim 1.
Regarding claim 4, the combination of Kojima, Gibbs, and Hayward teach all the claim limitations of claim 1 above; and further teach an insurance fee calculation system for inputting a facial image of an animal to be in coverage of insurance to the disease prediction system according to claim 1 and determining an insurance fee of the animal in accordance with the output prediction of contraction of a disease (Gibbs, paragraphs 0046, 0055, 0065, 0070, 0075 and Figs. 7, 10, and 15 teach the camera data being utilized in a ML model for predicting disease as mapped in claim 1; and further paragraphs 0043 and 0065 teach “Merely presented as one example of many behavior data driven commercial opportunities 212, current and historical data indicative of a generally healthy pet lifestyle may be of commercial value to pet health insurance desiring to provide lower premium insurance (determining an insurance fee of the animal) to healthier animals that are will have fewer and lower cost insurance claims when compared to (determining an insurance fee of the animal) animals with a data history (inputting a facial image of an animal to be in coverage of insurance to the disease prediction system…in accordance with the output prediction of contraction of a disease) living unhealthy lifestyles”).
Kojima, Gibbs, and Hayward are combinable for the same rationale as set forth above with respect to claim 1.
Regarding claim 6, Kojima teaches a disease prediction method including the steps of: preparing a facial still image of an animal excluding a human (paragraphs 0054-0055 and 0083-0089 teach a device’s “video camera” capturing a video and extracting (preparing) a “still image” of an animal showing its nose and eyes), and
inputting the facial image to a deep learning model and outputting a prediction by a computer regarding whether the animal may contract a disease within a predetermined period from the input facial still image of the animal using the deep learning model (paragraphs 0181-0182 teach a computer with a processor for performing the embodiments of the disclosure, including paragraphs 0054-0055, 0083-0089, 0165, and Fig. 12 teaching of identifying a “degree of similarity using known image pattern recognition technologies such as, e.g., a Bayesian recognition method, subspace method, and neural network” between the extracted still image and stored image patterns of diseases, for estimating the submitted animal disease for a specific time (predetermined period) of the still image extracted from the video);
While Kojima teaches utilizing a neural network for image pattern recognition, which is well known to require training on the type of data it is predicting, Kojima does not explicitly teach inputting the facial image to a deep learning model and outputting a prediction by a computer regarding whether the animal may contract a disease in future within a predetermined period from the input facial still image of the animal using the deep learning model; characterized in that the deep learning model is a learned model that performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data, inputs a facial still image of an animal, and outputs a prediction regarding whether the animal may contract a disease in future within a predetermined period.
Gibbs teaches inputting the facial image to a deep learning model and outputting a prediction by a computer regarding whether the animal may contract a disease in future within a predetermined period from the input facial still image of the animal using the deep learning model; characterized in that the deep learning model is a learned model that performs learning by using facial still images of animals excluding humans and the presence or absence of a contracted disease within a predetermined period from the time of imaging the animals as training data, inputs a facial still image of an animal, and outputs a prediction regarding whether the animal may contract a disease in future within a predetermined period (paragraphs 0055, 0065, 0070, 0075, 0092-0093, and Figs. 7 and 15 teach “ML engine identifies co-occurring behavioral anomalies…[and] match[es] the pattern to the symptomology expert system resulting in a predictive diagnosis of epilepsy”, and automatically contacts a vet. The system trains the ML engine with the “addition of a treatment to the pet's record [from the vet], and specifically a treatment targeting the co-occurring anomalous behavior generates a new rule to provide comparative analysis of the symptomology at future occurrences of the co-occurring anomalies to the first recorded pattern of the co-occurring anomalies (predicts occurrence of future disease)”. Further, the model is trained based on “real-time” or historic collected data from a certain time including the video data of a dog’s head and face; and Fig. 13 further depicts data including a still facial image from the front).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Gibbs’ teaching of dog disease predicting from input data including video images in Kojima’s teaching of animal still image matching via a neural network for disease prediction in order to “more accurately prescribe treatment protocols based on the pet's historical, clinical and predictive diagnostic information” (Gibbs, paragraph 0010).
Further, the combination at least implies inputting the facial image to a deep learning model and outputting a prediction by a computer regarding whether the animal may contract a disease in future within a predetermined period from the input facial still image of the animal using the deep learning model;…inputs a facial still image of an animal, and outputs a prediction regarding whether the animal may contract a disease in future within a predetermined period; however, Hayward teaches inputting the facial image to a deep learning model and outputting a prediction by a computer regarding whether the animal may contract a disease in future within a predetermined period from the input facial still image of the animal using the deep learning model;…inputs a facial still image of an animal, and outputs a prediction regarding whether the animal may contract a disease in future within a predetermined period (paragraphs 0021-0022, 0059, 0102, 0104, 0109, and 0112 teach using “deep learning” for predicting “risk variables, [in] an initial risk assessment may be made, which may include a scaled risk score or other suitable indicator to quantify the risk of insuring the user given the likelihood, for example (in the case of a life or health insurance policy) of the various medical-related conditions occurring within some future time horizon that coincides with the insurance coverage”; and “information collected and/or analyzed may pertain to domesticated animals (e.g., dogs, cats, thoroughbreds, etc.) and/or livestock”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Kojima’s teaching of animal still image matching via a neural network for disease prediction, as modified by Gibbs’ teaching of dog disease predicting from input data including video images, to include Hayward’s teaching of deep learning for predicting risk variables of animal health over a future time horizon in order to “automate and improve upon the efficiency and accuracy of existing insurance loss mitigation and prevention, and claims handling processes” (Hayward, paragraph 0041).
Regarding claim 7, the combination of Kojima, Gibbs, and Hayward teach all the claim limitations of claim 2 above; and further teach wherein the animal is a dog (Kojima, paragraphs 0001, 0054-0055, 0083-0089, and Figs. 12-13 teach a device’s “video camera” (interface unit) capturing a video and extracting a “still image” of an animal showing its nose and eyes including a dog).
Regarding claim 8, the combination of Kojima, Gibbs, and Hayward teach all the claim limitations of claim 2 above; and further teach an insurance fee calculation system for inputting a facial image of an animal to be in coverage of insurance to the disease prediction system according to claim 2 and determining an insurance fee of the animal in accordance with the output prediction of contraction of a disease (Gibbs, paragraphs 0046, 0055, 0065, 0070, 0075 and Figs. 7, 10, and 15 teach the camera data being utilized in a ML model for predicting disease as mapped in claim 1; and further paragraphs 0043 and 0065 teach “Merely presented as one example of many behavior data driven commercial opportunities 212, current and historical data indicative of a generally healthy pet lifestyle may be of commercial value to pet health insurance desiring to provide lower premium insurance (determining an insurance fee of the animal) to healthier animals that are will have fewer and lower cost insurance claims when compared to (determining an insurance fee of the animal) animals with a data history (inputting a facial image of an animal to be in coverage of insurance to the disease prediction system…in accordance with the output prediction of contraction of a disease) living unhealthy lifestyles”).
Kojima, Gibbs, and Hayward are combinable for the same rationale as set forth above with respect to claim 1.
Regarding claim 9, the combination of Kojima, Gibbs, and Hayward teach all the claim limitations of claim 3 above; and further teach an insurance fee calculation system for inputting a facial image of an animal to be in coverage of insurance to the disease prediction system according to claim 3 and determining an insurance fee of the animal in accordance with the output prediction of contraction of a disease (Gibbs, paragraphs 0046, 0055, 0065, 0070, 0075 and Figs. 7, 10, and 15 teach the camera data being utilized in a ML model for predicting disease as mapped in claim 1; and further paragraphs 0043 and 0065 teach “Merely presented as one example of many behavior data driven commercial opportunities 212, current and historical data indicative of a generally healthy pet lifestyle may be of commercial value to pet health insurance desiring to provide lower premium insurance (determining an insurance fee of the animal) to healthier animals that are will have fewer and lower cost insurance claims when compared to (determining an insurance fee of the animal) animals with a data history (inputting a facial image of an animal to be in coverage of insurance to the disease prediction system…in accordance with the output prediction of contraction of a disease) living unhealthy lifestyles”).
Kojima, Gibbs, and Hayward are combinable for the same rationale as set forth above with respect to claim 1.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yu et al (US Pub 20200005023) teaches utilizing deep learning for facial recognition in images of animals to diagnose diseases.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLINT MULLINAX whose telephone number is 571-272-3241. The examiner can normally be reached on Mon - Fri 8:00-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached on 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.M./Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123