DETAILED ACTION
This action is in reference to the communication filed on 10 DEC 2024.
Claims 1-21 are present and have been examined.
Claim Objections
Claims 1, 6, 11, 16, and 21 (and claims 7 and 17, by virtue of dependency on claims 6 and 16) are objected to because of the following informalities: These claims recite various misspellings of the word “abnormality.” Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 11, and 21 (and by extension, claims 2-10 and 12-20) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 11, and 21 recite the limitation "the indication indicating a single skin abnormality." The prior limitation recites “generating an indication…the indication representing a likelihood that the input image shows two or more skin abnormalities;” There is insufficient antecedent basis for this limitation in the claim, as the initial indication itself is claimed to represent a likelihood of two or more skin abnormalities. As such, the limitation regarding the indication indicating a single skin abnormality lacks antecedent basis. This discrepancy renders the scope of the claim indefinite, as it is unclear whether there are multiple indications generated or multiple outcomes of a single indication. This is particularly relevant in view of the dependent claims that follow.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. As explained below, the claims are directed to an abstract idea without significantly more.
Step One: Is the Claim directed to a process, machine, manufacture or composition of matter? YES
With respect to claims 1-21, independent claims 1, 11, and 21 recite a statutory category of invention.
Step 2A – Prong One: Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea? YES
With respect to claims 1-21, the independent claims (claims 1, 11, and 21) are directed, in part, to:
Claim 1: A method, comprising:
receiving,
generating an indication by applying a machine-learned model to the input image, the indication representing a likelihood that the input image shows two or more skin abnormalities;
responsive to the indication indicating a single skin abnormality in the input image:
generating
responsive to the indication indicating two or more skin abnormalities in the input image, outputting a message for display.
These claim elements are considered to be abstract ideas because they are directed to mental processes, which include concepts performed in the mind such as observation, evaluation, judgment, and opinion. Generating an indication of a likelihood, generating a diagnosis from a picture, and outputting a message are all examples of concepts which can be performed in the human mind.
The claims further recite mathematical concepts – i.e., relationships, formulas, equations, and/or calculations – the application of a machine-learned model to determine a likelihood of two or more abnormalities is a mathematical formula, equation, and/or calculation.
The claims further recite the concept of certain methods of organizing human activity, including managing personal behavior or relationships/interactions between people, including following rules and/or instructions. In the instant case the claims recite a method of receiving a request and subsequently providing a diagnosis and communication thereof to a user in response to the request, i.e. following a set of instructions to do so.
If a claim limitation, under its broadest reasonable interpretation, covers mental processes, mathematical concepts, and/or certain methods of organizing human activity, the claim is found to recite an abstract idea. Accordingly, the claims recite an abstract idea.
Step 2A – Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO.
This judicial exception is not integrated into a practical application. In particular, each of the claims recites additional elements such as a “client device” with a “display,” and wherein the diagnosis is “computerized.” Claim 11 further recites a “non-transitory computer program product comprising a computer readable storage medium comprising a computer program code;” and claim 21 recites “a computer system comprising one or more processors; a non-transitory computer program product comprising a computer readable storage medium comprising computer program code for execution by one or more processors.” The client device, display, and “computerized” diagnosis common to all claims, as well as the computer program products/CRM/processors in claims 11 and 21, are recited at a high level of generality and as such amount to no more than adding the words “apply it” to the judicial exception, mere instructions to implement the abstract idea on a computer, or mere use of the computer as a tool to perform the abstract idea (see MPEP 2106.05(f)), or generally link the use of the judicial exception to a particular technological field of use/computing environment (see MPEP 2106.05(h)). Examiner finds no improvement to the functioning of the computer or any other technology or technical field in the elements as identified above and as claimed (see MPEP 2106.05(a)), nor any other application or use of the judicial exception in some meaningful way beyond a general link between the use of the judicial exception and a particular technological environment (see MPEP 2106.05(e)). Further, Examiner notes that storage of data (i.e., via the CRM) and the display of a message on a screen, as well as the general sending/receiving of data from the client device, are generally considered to be examples of adding insignificant extra-solution activity to the judicial exceptions identified (see MPEP 2106.05(g)).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO.
The independent claims are additionally directed to claim elements common to claims 1, 11, and 21, including a “client device” with a “display,” and wherein the diagnosis is “computerized.” Claim 11 further recites a “non-transitory computer program product comprising a computer readable storage medium comprising a computer program code;” and claim 21 recites “a computer system comprising one or more processors; a non-transitory computer program product comprising a computer readable storage medium comprising computer program code for execution by one or more processors.”
When considered individually, the above identified claim elements contribute only generic recitations of technical elements to the claims. It is readily apparent, for example, that the claims are not directed to any specific improvements of these elements. Examiner looks to Applicant’s specification at:
[0060] The client device 110 may be a computing device such as a smartphone with an operating system such as ANDROID® or APPLE® IOS®, a tablet computer, a laptop computer, a desktop computer, or any other type of network-enabled device that includes or can be configured to connect with a camera. In another embodiment, the client device 110 is a headset including a computing device or a smartphone camera for generating an augmented reality (AR) environment to the user, or a headset including a computing device for generating a virtual reality (VR) environment to the user. A typical client device 110 includes the hardware and software needed to connect to the network 122 (e.g., via WiFi and/or 4G or 5G or other wireless telecommunication standards).
[0093] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
[0094] Embodiments of the invention may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
These passages, as well as others, make it clear that the invention is not directed to a technical improvement. When the claims are considered individually and as a whole, the additional elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense – i.e., a generic computer receives information from another generic computer, processes the information, and then sends information back. The most significant elements of the claims – that is, the elements that really outline the inventive elements of the claims – are set forth in the elements identified as an abstract idea. The fact that generic computing devices facilitate the abstract concept is not enough to confer statutory subject matter eligibility.
As per dependent claims 2-10, 12-20:
Dependent claims 2 and 12 do not recite any additional abstract ideas, but do recite the non-abstract element of a database. Examiner finds that the storage or retrieval of information from a database is insignificant extra-solution activity, and as such does not support a finding of a practical application or significantly more.
Dependent claims 3-10 and 13-20 are not directed to any additional abstract ideas and are also not directed to any additional non-abstract claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above – such as additional clarification of the mathematical relationships (e.g., thresholds or modeling), the further steps of providing messages based on those relationships, and the mental determinations of threshold considerations. While these descriptive elements may provide further helpful context for the claimed invention, they do not serve to confer subject matter eligibility, since their individual and combined significance does not outweigh the abstract concepts at the core of the claimed invention.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Flank (US 20190220738 A1).
In reference to claims 1, 11, 21:
Flank teaches: A method, comprising:
A non-transitory computer program product comprising a computer readable storage medium comprising computer program code (at least [fig 1, 2 and related text]) for:
A computer system comprising one or more processors, and a non-transitory computer program product comprising a computer readable storage medium comprising computer program code for execution by the one or more processors, the computer program code (at least [fig 1, 2 and related text]) for:
receiving, from a client device, a request for diagnoses of skin abnormalities presented in an input image (at least [fig 1 and related text] “In use, the client device 104 is used to acquire an image. As an example, a human hand 120 is shown with a skin blemish 122 shown thereon. The client device 104 acquires one or more images of the skin blemish 122 and sends the image(s) to the skin condition analysis server 102 via network 124.” See also [fig 7 and related text] image 702);
generating an indication by applying a machine-learned model to the input image, the indication representing a likelihood that the input image shows two or more skin abnormalities (at least [fig 1 and related text] “The skin condition analysis server 102 may identify a likely skin condition based on the uploaded images from client device 104.” At [fig 7 and related text] “The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions, and may further provide a probability rating for each condition. The probability rating can serve as a score, confidence level, or other indicator of the likelihood of the diagnosed skin condition, based on the submitted image(s).”);
responsive to the indication indicating a single skin abnormality in the input image:
generating a computerized diagnosis of the single skin abnormality, and outputting the computerized diagnosis for display on the client device (at least [fig 7 and related text] “The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions, and may further provide a probability rating for each condition. The probability rating can serve as a score, confidence level, or other indicator of the likelihood of the diagnosed skin condition, based on the submitted image(s). The analysis results are provided to the client and rendered on the user interface 700 in field 704. As can be seen in field 704, two conditions (rosacea, seborrheic eczema), and two corresponding probabilities (89%, 77%) are shown. In practice, more or fewer conditions may be shown”); and
responsive to the indication indicating two or more skin abnormalities in the input image, outputting a message for display (at least [fig 1 and related text] “The diagnosed skin condition, along with a recommended treatment, are then transmitted to client device 104 and rendered on an electronic display to provide the information to a user.” At [fig 7 and related text] “The analysis results are provided to the client and rendered on the user interface 700 in field 704. As can be seen in field 704, two conditions (rosacea, seborrheic eczema), and two corresponding probabilities (89%, 77%) are shown. In practice, more or fewer conditions may be shown. User interface 700 further includes a treatment field 706. The treatment field 706 may show text, video, images, and/or other multimedia elements to provide treatment instructions and/or procedures for the skin condition(s) shown in field 704.”)
In reference to claims 2, 12:
Flank further teaches: responsive to the indication indicating the single skin abnormality, accessing a diagnosis model from a database (at least [fig 1 and related text] “The skin condition analysis server 102 implements a neural network trained with supervised learning techniques. The skin condition analysis server 102 may identify a likely skin condition based on the uploaded images from client device 104.” See also [035] “Rather, the subject image diagnosis and/or treatment results are output at 314 based on results from the trained machine learning skin condition analysis module 308. In this way, a user of disclosed embodiments can diagnose skin conditions such as various types of rashes and/or other skin diseases from the convenience of their own home.”);
generating a prediction for the skin abnormality in the input image by applying the diagnosis model to the input image (at least [fig 1 and related text] Diagnosis of likely skin condition by applying model as identified above; see also [035]); and
generating the computerized diagnosis for the input image from the prediction (at least [fig 1 and related text, as well as 035] “Rather, the subject image diagnosis and/or treatment results are output at 314 based on results from the trained machine learning skin condition analysis module 308. In this way, a user of disclosed embodiments can diagnose skin conditions such as various types of rashes and/or other skin diseases from the convenience of their own home.” See also [fig 7 and related text] “The image is provided to the skin condition analysis server 102 via network 124. The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions…”).
In reference to claims 3, 13:
Flank further teaches: wherein the machine-learned model is configured as a neural network architecture including at least a set of layers of nodes, wherein each layer is connected to a previous layer through a respective set of trained weights (at least [fig 4, 5 and related text including 038-042] “The output of the output layer 508 is input to a loss function analyzer 510 that outputs a loss value based on the output of output layer 508. The loss is input to an evaluation optimizer 512. The evaluation optimizer implements a function that takes loss and adjusts the weights of elements in the evaluation neural network to minimize that loss.”).
In reference to claims 4, 14:
Flank further teaches: wherein the output message displays that a computerized diagnosis cannot be output for the input image (at least [fig 7 and related text] treatment in space 706 is not appropriate for either of the listed conditions in 704; at [030] “In some embodiments, the treatment database 136 may include multiple entries for a particular condition, based on various factors such as severity, age of patient, gender of patient, size of patient, and/or preexisting medical conditions. In this way, a more individualized treatment plan can be achieved.” – treatment will vary based on information other than the condition thresholds themselves).
In reference to claims 5, 15:
Flank further teaches: responsive to the indication indicating neither a single skin abnormality nor two or more skin abnormalities, outputting a second message for display (at least [fig 7 and related text] Message 704 would show a 0% probability, i.e., the text of the message would be different).
In reference to claims 6, 16:
Flank further teaches: wherein generating the indication further comprises:
generating a first likelihood and a second likelihood from the machine-learned indicator model, wherein the method further comprises determining that the indication indicates the input image presents a single skin abnormality when the first likelihood is above a first threshold, and determining that the indication indicates the input image presents two or more skin abnormalities when the second likelihood is above a second threshold (at least [fig 7 and related text] “The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions, and may further provide a probability rating for each condition. The probability rating can serve as a score, confidence level, or other indicator of the likelihood of the diagnosed skin condition, based on the submitted image(s). The analysis results are provided to the client and rendered on the user interface 700 in field 704. As can be seen in field 704, two conditions (rosacea, seborrheic eczema), and two corresponding probabilities (89%, 77%) are shown. In practice, more or fewer conditions may be shown.”).
In reference to claims 7, 17:
Flank further teaches: responsive to both the first likelihood being below the first threshold and the second likelihood being below the second threshold, outputting a second message for display (at least [fig 7 and related text] “The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions, and may further provide a probability rating for each condition. The probability rating can serve as a score, confidence level, or other indicator of the likelihood of the diagnosed skin condition, based on the submitted image(s). The analysis results are provided to the client and rendered on the user interface 700 in field 704. As can be seen in field 704, two conditions (rosacea, seborrheic eczema), and two corresponding probabilities (89%, 77%) are shown.” – i.e., the content of the message would change based on the percentage(s)).
In reference to claims 8, 18:
Flank further teaches: wherein the machine-learned model is also a diagnosis model further configured to receive an image and generate predictions for one or more skin abnormalities presented in the image (at least [fig 1 and related text] “The skin condition analysis server 102 may identify a likely skin condition based on the uploaded images from client device 104.” At [fig 7 and related text] “The skin condition analysis server, using a machine learning system comprising a first neural network for evaluation and a second neural network configured and disposed to adjust the hyperparameters of the first neural network, provides one or more likely skin conditions, and may further provide a probability rating for each condition. The probability rating can serve as a score, confidence level, or other indicator of the likelihood of the diagnosed skin condition, based on the submitted image(s).”).
In reference to claims 9, 19:
Flank further teaches: wherein generating the indication further comprises applying the diagnosis model to the input image to generate an output vector including a set of elements, wherein the predictions for the one or more skin abnormalities are represented by values for a subset of elements in the output vector, and wherein the indication is represented by values of one or more remaining elements in the output vector (at least [047] “The output of the optimization neural network 656 includes tuples of information [Pred, NP] that include the new hyperparameters (NP) and corresponding predicted loss (Pred) for those hyperparameters. At 658, the prune_pred process filters out some of the hyperparameters received from optimization network 656, and passes on some of the hyperparameters (prune_np) to the evaluation neural network 660…. Embodiments can include a computer-implemented method for training a neural network, comprising: obtaining a plurality of training images; providing the plurality of training images to an evaluation neural network, wherein the evaluation neural network comprises an input layer, an output layer, and one or more hidden layers configured between the input layer and the output layer; generating a first set of new hyperparameters for the evaluation neural network; pruning the plurality of the first set hyperparameters to form a subset of the first set of hyperparameters; performing a k-local search to generate a second set of new hyperparameters using the subset of the first set of hyperparameters as seeds; and evaluating the plurality of training images using the second set of new hyperparameters.” See also [fig 3 and related text] for further discussion of layers/input output elements, see [fig 1/7 and related text] for discussion of diagnosis process).
In reference to claims 10, 20:
Flank further teaches: wherein generating the indication further comprises applying the diagnosis model to the input image to generate an output vector including a set of elements, the predictions for the one or more skin abnormalities represented by values for the elements in the output vector, and wherein the input image is determined to present two or more skin abnormalities when differences between values of the set of elements in the output vector are below a threshold (at least [047] “The output of the optimization neural network 656 includes tuples of information [Pred, NP] that include the new hyperparameters (NP) and corresponding predicted loss (Pred) for those hyperparameters. At 658, the prune_pred process filters out some of the hyperparameters received from optimization network 656, and passes on some of the hyperparameters (prune_np) to the evaluation neural network 660…. Embodiments can include a computer-implemented method for training a neural network, comprising: obtaining a plurality of training images; providing the plurality of training images to an evaluation neural network, wherein the evaluation neural network comprises an input layer, an output layer, and one or more hidden layers configured between the input layer and the output layer; generating a first set of new hyperparameters for the evaluation neural network; pruning the plurality of the first set hyperparameters to form a subset of the first set of hyperparameters; performing a k-local search to generate a second set of new hyperparameters using the subset of the first set of hyperparameters as seeds; and evaluating the plurality of training images using the second set of new hyperparameters.” See also [fig 3 and related text] for further discussion of layers/input output elements; see [fig 1/7 and related text] for discussion of diagnosis process).
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20200170564 to Jiang discloses a means of modeling skin abnormalities using images.
US 20200261580 to Willey discloses providing suggested treatments for skin disorders and rashes based on user supplied images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE KOLOSOWSKI-GAGER whose telephone number is (571)270-5920. The examiner can normally be reached Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHERINE KOLOSOWSKI-GAGER/
Primary Examiner
Art Unit 3687