Prosecution Insights
Last updated: April 18, 2026
Application No. 17/749,388

MACHINE LEARNING MODEL AND NEURAL NETWORK TO PREDICT OBJECT CHARACTERISTICS FROM DIGITAL IMAGE SIMILARITIES

Final Rejection: §101, §103, §112
Filed: May 20, 2022
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Oracle International Corporation
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs TC avg)
Interview Lift: +36.2% in resolved cases with interview
Avg Prosecution: 3y 5m typical timeline; 32 currently pending
Total Applications: 660 across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Based on career data from 628 resolved cases; Tech Center averages are estimates.
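As a quick sanity check, the career allow rate follows directly from the raw counts shown above (the interview-lift and grant-probability figures are model outputs and cannot be recomputed from this page):

```python
# Career allow rate from the raw counts reported on this page.
granted, resolved = 383, 628
allow_rate = granted / resolved * 100
print(f"Career Allow Rate: {allow_rate:.0f}%")  # matches the reported 61%

# The -1.0% delta implies a Tech Center average of roughly 62%.
tc_avg = allow_rate + 1.0
```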

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments and remarks received 25 August 2025. Claims 1 - 20 are currently pending.

Claim Objections

The objections to claims 1, 5, 8, 12 and 15 - 20, due to minor informalities, are hereby withdrawn in view of the amendments and remarks received 25 August 2025.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 - 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the similarity score associated with each of the similar object images;" in lines 22 - 23. There is insufficient antecedent basis for this limitation in the claim.

Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to what subject matter the Applicant intends to claim in lines 5 - 7 of claim 4.
Clarification and appropriate correction are required. In view of the previous version of claim 4 and lines 21 - 23 of claim 1, the Examiner asserts that lines 5 - 7 of claim 4 appear to have been mistakenly added to the end of claim 4. Therefore, for purposes of examination, the Examiner will treat claim 4 as only requiring lines 1 - 4 of claim 4 and suggests deleting lines 5 - 7 of claim 4.

Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which known object images “the known object images” recited on line 11 are referencing. Are they referring to the “group of known object images” recited on lines 6 - 7 of claim 1, the “one or more known object images” recited on line 9 of claim 1 or some other known object images? Clarification and appropriate correction are required. For purposes of examination, the Examiner will treat “the known object images” recited on line 11 of claim 5 as referencing the “one or more known object images” recited on line 9 of claim 1.

Claim 5 recites the limitation "the similarity scores between the target product (i) and each of the known products" in lines 12 - 13. There is insufficient antecedent basis for this limitation in the claim.

Claim 5 recites the limitation "the set of similar product images" in line 14. There is insufficient antecedent basis for this limitation in the claim.

Claim 8 recites the limitation "the historical event data for a given similar product" (emphasis added) in lines 17 - 18. There is insufficient antecedent basis for this limitation in the claim.

Claim 8 recites the limitation "the similarity score of the given similar product" in lines 18 - 19.
There is insufficient antecedent basis for this limitation in the claim.

Claim 11 recites the limitation “the similarity score and historic event combination of a similar image” in lines 4 - 5. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination the Examiner will treat the aforementioned limitation as --the similarity score and historical event data combination of a similar product image of the set of similar product images--.

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear as to which digital pixel data “the digital pixel data” recited on line 13 is referencing. Is it referring to the “digital pixel data of the target product image” recited on lines 8 - 9 of claim 15, the “digital pixel data from a group of known product images” recited on lines 9 - 10 of claim 15 or both of the aforementioned “digital pixel data”? Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claim as referencing both the “digital pixel data of the target product image” recited on lines 8 - 9 of claim 15 and the “digital pixel data from a group of known product images” recited on lines 9 - 10 of claim 15.

Claim 15 recites the limitation "the historical event data for a given similar product" (emphasis added) in line 22. There is insufficient antecedent basis for this limitation in the claim.

Claim 15 recites the limitation "the similarity score of the given similar product" in line 23. There is insufficient antecedent basis for this limitation in the claim.

Claim 18 is rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear which similarity score “the similarity score” recited on line 5, along with subsequent recitations of “the similarity score”, is referencing, at least because lines 11 - 12 of claim 15 recite, in part, “generate… a similarity score between the target product image and one or more known product images” and line 23 of claim 15 recites, in part, “the similarity score of the given similar product”. Thus, it is unclear which, if either, of these similarity scores “the similarity score” recited on line 5 of claim 18, along with subsequent recitations of “the similarity score”, is referencing. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claim as referencing “the similarity score of the given similar product” recited on line 23 of claim 15.

Claim 18 is further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention because it is unclear which historical event data “the historical event data” recited on line 6 is referencing, at least because lines 17 - 19 of claim 15 recite, in part, “for each similar product image of the set of similar product images, retrieve product attributes including historical event data” and line 22 of claim 15 recites, in part, “the historical event data for a given similar product”. Thus, it is unclear which, if either, of these recitations of historical event data “the historical event data” recited on line 6 of claim 18 is referencing. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat “the historical event data” recited on line 6 of claim 18 as referencing “the historical event data for a given similar product” recited on line 22 of claim 15.

Claim 20 recites the limitation “the similarity score and historic event combination of a similar image” in lines 4 - 5. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination the Examiner will treat the aforementioned limitation as --the similarity score and historical event data combination of a similar product image of the set of similar product images--.

Claims 2, 3, 6, 7, 9, 10, 12 - 14, 16, 17 and 19 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to being dependent upon a rejected base claim, but would be withdrawn from the rejection if their base claim overcomes the rejection.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1 - 6 and 8 - 20 are rejected under 35 U.S.C.
101 because the claimed invention is directed to a judicial exception, an abstract idea, without significantly more.

The claims recite, at a high level of generality: comparing at least digital pixel data of the target object/product image to digital pixel data from a group of known object/product images; generating a similarity score between the target object/product image and one or more known object/product images from the group of known object/product images based at least on the digital pixel data; identifying a set of similar object/product images based at least in part on the similarity score of the one or more known object/product images; for each similar object/product image of the set of similar object/product images, retrieving object/product attributes including historical event data associated with each similar object/product image; and generating a predicted characteristic model including a predicted characteristic for the target object/product represented in the target object/product image based at least on the historical event data for a given similar object/product combined with the similarity score of the given similar object/product.

The limitation of “comparing… at least digital pixel data of the target object image to digital pixel data from a group of known object images”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of generic computer components.
That is, other than reciting “the machine learning model”, “a computing system comprising at least one processor” (see claim 1), “a non-transitory computer-readable medium storing computer-executable instructions that, when executed by a computer including a processor, cause the computer to perform functions configured by the computer-executable instructions” (see claim 8) and “a computing system, comprising: at least one processor connected to at least one memory; and a non-transitory computer readable medium including instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to” (see claim 15), nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine learning model operates or how the comparisons are made, and the plain meaning of “comparing” encompasses mental observations or evaluations, e.g., a user mentally identifying similarities and/or differences between two images. Under its broadest reasonable interpretation when read in light of the specification, the “comparing” encompasses mental observations and/or evaluations that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed comparing digital pixel data of the target object/product image to digital pixel data from a group of known object/product images encompasses observing an image of an object/product alongside a plurality of images of objects/products and performing an evaluation by mentally identifying similarities and/or differences between the images. 
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).

Similarly, the limitation of “generating… a similarity score between the target object image and one or more known object images from the group of known object images based at least on the digital pixel data”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of the aforementioned generic computer components. That is, other than reciting the aforementioned generic computer components, nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine learning model operates or how the similarity score is generated, and the plain meaning of “generating” encompasses mental observations or evaluations, e.g., a user mentally judging and/or ranking how similar or dissimilar two images are. Under its broadest reasonable interpretation when read in light of the specification, the “generating” encompasses mental observations and/or evaluations that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed generating a similarity score encompasses observing an image of an object/product alongside a plurality of images of objects/products and performing an evaluation by mentally judging and/or ranking how similar or dissimilar the images are.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).

Likewise, the limitation of “identifying… a set of similar object images based at least in part on the similarity score of the one or more known object images”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of the aforementioned generic computer components. That is, other than reciting the aforementioned generic computer components, nothing in the claim element precludes the step from practically being performed in the mind. The Examiner asserts that the claim(s) do not provide any details nor limit how the machine learning model operates or how the similar images are identified based in part on the similarity score, and the plain meaning of “identifying” encompasses mental observations or evaluations, e.g., a user mentally comparing two images and judging whether they are similar enough to each other to be considered similar images. Under its broadest reasonable interpretation when read in light of the specification, the “identifying” encompasses mental observations and/or evaluations that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed identifying a set of similar object/product images encompasses observing an image of an object/product alongside a plurality of images of objects/products and performing an evaluation by mentally judging/identifying which images of the plurality of images are most similar to the image of the object/product.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).

Relatedly, the limitation of “for each similar object image of the set of similar object images, retrieving object attributes including historical event data associated with each similar object image”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of the aforementioned generic computer components. That is, other than reciting the aforementioned generic computer components, nothing in the claim element precludes the step from practically being performed in the mind. Under its broadest reasonable interpretation when read in light of the specification, the “retrieving” encompasses mental observations and/or evaluations that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed retrieving object/product attributes including historical event data associated with each similar object/product image encompasses observing images of objects/products and mentally recalling characteristics and/or information related to the objects/products. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).
Similarly, the limitation of “generating a predicted characteristic model including a predicted characteristic for the target object represented in the target object image based at least on the historical event data for a given similar object combined with the similarity score of the given similar object”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind using observation, evaluation, judgment, and opinion but for the recitation of the aforementioned generic computer components. That is, other than reciting the aforementioned generic computer components, nothing in the claim element precludes the step from practically being performed in the mind. Under its broadest reasonable interpretation when read in light of the specification, the “generating” encompasses mental observations and/or evaluations that are practically performed in the human mind. For example, but for the recitation of the aforementioned generic computer components, the claimed generating a predicted characteristic model including a predicted characteristic for the target object encompasses mentally thinking about characteristics and information related to objects/products that are similar to a target object/product and performing an evaluation by mentally forming an opinion of a characteristic that can be associated with the target object/product based upon the characteristics and information related to the similar objects/products.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, with or without the use of a physical aid such as pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP § 2106.04(a)(2)(III).

Accordingly, the claims recite an abstract idea.
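For orientation, the five recited steps (comparing pixel data, generating similarity scores, identifying similar images, retrieving historical event data, and generating a predicted characteristic) amount to a short pipeline. The sketch below is illustrative only, not code from the application; the function name, the inverse-distance similarity measure, and the similarity-weighted average are all assumptions:

```python
import math

def predict_characteristic(target, known, history, top_k=3):
    """Illustrative sketch of the claimed steps; all names are hypothetical.

    target  : list of pixel values for the target object image
    history : one historical-event value per known image
    known   : list of equal-length pixel lists (the known object images)
    """
    # Steps 1-2: compare pixel data and generate a similarity score per
    # known image (inverse Euclidean distance over raw pixels here).
    def score(img):
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(target, img)))
        return 1.0 / (1.0 + d)

    scores = [score(img) for img in known]
    # Step 3: identify the set of similar images by top similarity score.
    ranked = sorted(range(len(known)), key=lambda i: scores[i], reverse=True)
    similar = ranked[:top_k]
    # Steps 4-5: retrieve each image's historical event data and combine
    # it with its similarity score (similarity-weighted average).
    total = sum(scores[i] for i in similar)
    return sum(scores[i] * history[i] for i in similar) / total
```

A weighted average is only one way to "combine" historical data with similarity scores; the claims as characterized in this action leave the combination unspecified, which is part of the examiner's point.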
In addition, with regards to dependent claims 5 and 12, the claims recite an abstract idea that can either be considered a mental process or a mathematical concept. For example, the limitation of generating the predicted characteristic based on a function, as drafted, is a process that, under its broadest reasonable interpretation, encompasses mathematical concepts that can be performed mentally. The Examiner notes that, under circumstances in which claim limitations fall within different groupings of abstract ideas, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)).

This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements of: “a computing system comprising at least one processor”, “inputting, to a machine learning model, a target object image in digital form that represents a target object”, “generating an electronic message with the predicted characteristic for the target object”, “transmitting the electronic message to a remote computer”, “a non-transitory computer-readable medium storing computer-executable instructions that, when executed by a computer including a processor, cause the computer to perform functions configured by the computer-executable instructions” and “a computing system, comprising: at least one processor connected to at least one memory; and a non-transitory computer readable medium including instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to [perform functions]”.
The limitations of “a computing system comprising at least one processor”, “an electronic message”, “transmitting the electronic message to a remote computer”, “a non-transitory computer-readable medium storing computer-executable instructions that, when executed by a computer including a processor, cause the computer to perform functions configured by the computer-executable instructions” and “a computing system, comprising: at least one processor connected to at least one memory; and a non-transitory computer readable medium including instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to [perform functions]” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components.

Furthermore, the claims as a whole merely describe how to generally “apply” the concept of predicting a characteristic for a target object/product in a computer environment. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. See MPEP § 2106.05(f).

Further, the limitations of “inputting… a target object image in digital form that represents a target object”, “generating an electronic message with the predicted characteristic for the target object” and “transmitting the electronic message to a remote computer” are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP § 2106.05(g). In addition, all uses of the recited judicial exception require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claims. These limitations amount to necessary data gathering. See MPEP § 2106.05.
Additionally, the elements of the aforementioned limitations amount to recording and transmitting digital images and information by use of conventional or generic technology in a well-known environment and are well-understood, routine, conventional activity. See MPEP § 2106.05(d).

Additionally, the limitation of “a machine learning model” provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP § 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. Moreover, the machine learning model is used to generally apply the abstract idea without placing any limits on how the machine learning model functions. See MPEP 2106.05(f).

Additionally, the recitation of “a machine learning model” merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element of “a machine learning model” limits the identified judicial exception of predicting a characteristic for a target object/product, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Accordingly, the claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements of: “a computing system comprising at least one processor”, “inputting, to a machine learning model, a target object image in digital form that represents a target object”, “generating an electronic message with the predicted characteristic for the target object”, “transmitting the electronic message to a remote computer”, “a non-transitory computer-readable medium storing computer-executable instructions that, when executed by a computer including a processor, cause the computer to perform functions configured by the computer-executable instructions” and “a computing system, comprising: at least one processor connected to at least one memory; and a non-transitory computer readable medium including instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to [perform functions]” do not add a meaningful limitation to the abstract idea because they merely perform insignificant pre/post extrasolution activity, mere data gathering and output, and/or amount to no more than mere instructions to apply the abstract idea using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible. 
In addition, with regards to dependent claims 2 - 6, 9 - 14 and 16 - 20, the Examiner asserts that the claims are also directed to the abstract idea of predicting a characteristic for a target object/product and merely further limit the abstract idea claimed in independent claims 1, 8 and 15, by further identifying that the predicted characteristic for the target object/product can be adjusted, by further identifying that certain objects/products should have more influence on the predicted characteristic for the target object/product than others, and/or by further identifying types of images that should not be considered as being similar to the target object/product image. However, the Examiner asserts that a more detailed abstract idea remains an abstract idea and that none of the limitations of the dependent claims considered as an ordered combination provide eligibility because, taken as a whole, the claims merely instruct the practitioner to apply the abstract idea using generic computer components. The claims are not eligible.

Response to Arguments

Applicant's arguments filed 25 August 2025 have been fully considered but they are not persuasive. On pages 11 - 13 of the remarks the Applicant’s Representative argues that claims 1 - 6 and 8 - 20 are directed towards statutory subject matter under 35 U.S.C. 101.
The Applicant’s Representative argues that the instant invention is “directed to addressing a computer-centric problem (e.g., processing and predicting from vast amounts of pixel-level image data), not to replicating a human mental process.” The Applicant’s Representative argues that humans “cannot feasibly or practically evaluate millions of pixel values across massive datasets in their minds, nor can they consistently generate numerical similarity scores from such comparisons.” Furthermore, the Applicant’s Representative argues that “the claimed invention provides a technological improvement over prior systems” as it “allows for automated and scalable analysis of image databases far beyond the capacity of human cognition.” Thus, the Applicant’s Representative argues that “the claims do not recite steps that can be practically performed by the human mind” and instead “recite a series of specific, computer-implemented operations that improve the functioning of prediction systems from unknown objects.” Therefore, the Applicant’s Representative argues that “the claimed subject matter integrates any alleged abstract idea into a practical application, providing a concrete technological improvement and satisfying the requirements of 35 U.S.C. §101.”

The Examiner respectfully disagrees. Initially, the Examiner asserts that the broadest reasonable interpretation of the instant claims does not require that the claimed images are composed of millions of pixels or any specific minimum number of pixels, nor that the one or more known images comprises thousands or millions of images. Furthermore, the Examiner asserts that claims 1 - 6 and 8 - 20 are still found to be directed to a judicial exception, an abstract idea, without significantly more.
The Examiner asserts that the claims are still found to be directed to mental processes that can practically be performed in the human mind at least because, as discussed above in section 22 of the instant Office Action, the generic computer components are merely used as a tool to perform the abstract idea of predicting a characteristic for a target object/product. Moreover, the Examiner asserts that if “a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea” and that claims “can recite a mental process even if they are claimed as being performed on a computer”, see at least MPEP § 2106.04(a)(2)(III)(B) and MPEP § 2106.04(a)(2)(III)(C).

In addition, the Examiner asserts that just using a computer and a machine learning model to compare pixel data of images, generate similarity scores and generate a predicted characteristic does not improve the functioning of prediction systems at least because the generic computer components are merely used as a tool to perform the abstract idea of predicting a characteristic for a target object/product. Furthermore, the Examiner asserts that to “show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology.” See at least MPEP § 2106.05(a)(II) and MPEP § 2106.05(f).
Additionally, the Examiner asserts that requiring digital images, generic computer components and a machine learning model to implement the abstract idea is insufficient to integrate the recited judicial exception into a practical application at least because they merely link the use of the recited judicial exception to a particular technological environment or field of use and/or merely confine the use of the abstract idea to a particular technological environment (machine learning). See at least MPEP § 2106.04(d) and MPEP § 2106.05(h). Therefore, the Examiner asserts that the instant claims are still found to be directed towards ineligible subject matter under 35 U.S.C. 101, a judicial exception, an abstract idea, without significantly more. On pages 14 - 18 of the remarks the Applicant’s Representative argues that Craparotta et al. fail to teach or suggest “comparing, by the machine learning model, at least digital pixel data of the target object image to digital pixel data from a group of known object images”. The Applicant’s Representative argues that Craparotta et al. fail to teach or suggest the aforementioned disputed claim limitation at least because the model of Craparotta et al. “transforms images into abstract feature vectors and integrates them with non-image attributes” and “never compares digital pixel data of a target image directly to digital pixel data of known images.” The Examiner respectfully disagrees. The Examiner asserts that, at least, Craparotta et al. disclose the aforementioned disputed claim limitation, see at least page 1538 right-hand column first-full paragraph - page 1539 section 2.2, page 1538 figure 2, page 1541 left-hand column first-full paragraph - fifth-full paragraph, page 1541 figure 7, page 1542 section 4 - section 4.1 paragraph 2, page 1543 section 4.2 paragraph 1, page 1543 figure 11, page 1544 section 5 paragraph 1 and page 1545 figure 16 of Craparotta et al. 
wherein they disclose that “Convolutional Neural Network (CNN) is a type of feed-forward artificial NN… These systems can operate at the pixel level and learn both low-level features and high-level representations in an integrated manner”, that “CNNs have been largely used in the fashion industry, mainly in garment recognition [23] and recommendation systems [24,25], due to their capability to learn the features of cloth representation and to easily evaluate image similarity”, that a “SNN consists of two twin CNNs which accept distinct inputs and are joined by an energy function at the top. This function computes a distance between the highest-level feature representation on each side, as shown in Figure 4”, that the “aim of the SNN is to model the relation between distance di1,i2 and the similarity in the feature space (input data)”, that “input data are pairs of historical items composed by the attributes and the image of the items. Every image is then codified through a CNN. Finally, the attributes and CNN outputs are joined together and the distance between the features of the pair are compared to the sales distances”, that the “first model is by far more complex, with about 23 million parameters against 260 thousand of the second. Applying these models on 200x200 images,” that for “the energy function, there are also different possibilities. The simplest approach is to directly use a summarizing function to condensate the couple of feature vectors in the prediction. 
Examples of suitable functions are the L2 norm of the difference of the vectors, the cosine distance or, more generally, any distance function defined on the features space” and that the “main concept of the proposed process is based on the best practices, used in fashion companies, which consists of comparing the design, the style, the visual appearance and other technical attributes of new products to those of historical ones in order to perform sales forecasting.” The Examiner asserts that, as shown herein above and in the cited portions, Craparotta et al. disclose that their model(s) can operate at the pixel level, that their model(s) is applied on 200x200 images, that their model determines the distance, i.e., similarity, between features of an input data pair and that features of the input data pair include images of items of the input data pair. Furthermore, the Examiner asserts that at least figures 2, 4, 7 and 11 of Craparotta et al. illustrate that a target object image is compared to a group of known object images. Thus, the Examiner asserts that Craparotta et al. disclose the aforementioned disputed claim limitation at least because Craparotta et al. disclose that images composed of digital pixel data are compared. In addition, the Examiner asserts that the broadest reasonable interpretation of the aforementioned disputed claim limitation does not require that direct pixel-to-pixel image comparison is performed nor that only digital pixel data is compared between images. Therefore, the Examiner asserts that at least Craparotta et al. disclose the aforementioned disputed claim limitation. On pages 14 - 18 of the remarks the Applicant’s Representative argues that Craparotta et al. 
fail to teach or suggest “generating, by the machine learning model, a similarity score between the target object image and one or more known object images from the group of known object images based at least on comparing the digital pixel data of the target object image to the digital pixel data from the one or more known object images”. The Applicant’s Representative argues that Craparotta et al. fail to teach or suggest the aforementioned disputed claim limitation at least because the model of Craparotta et al. outputs a predicted sales profile distance, “not a similarity score between images.” The Examiner respectfully disagrees. The Examiner asserts that, at least, Craparotta et al. disclose the aforementioned disputed claim limitation, see at least page 1538 right-hand column first-full paragraph - page 1539 section 2.2, page 1538 figure 2, page 1539 figures 3 and 4, page 1540 section 3.1 paragraph 1, page 1541 left-hand column first-full paragraph - section 3.1.3 paragraph 1, page 1541 figure 7, page 1542 section 4.1 - page 1543 section 4.2 paragraph 1, page 1543 figure 11, pages 1544 - 1545 section 5 and page 1545 figure 16 of Craparotta et al. wherein they disclose that “Convolutional Neural Network (CNN) is a type of feed-forward artificial NN… These systems can operate at the pixel level and learn both low-level features and high-level representations in an integrated manner”, that “CNNs have been largely used in the fashion industry, mainly in garment recognition [23] and recommendation systems [24,25], due to their capability to learn the features of cloth representation and to easily evaluate image similarity”, that a “SNN consists of two twin CNNs which accept distinct inputs and are joined by an energy function at the top. 
This function computes a distance between the highest-level feature representation on each side, as shown in Figure 4”, that the “aim of the SNN is to model the relation between distance di1,i2 and the similarity in the feature space (input data)”, that “input data are pairs of historical items composed by the attributes and the image of the items. Every image is then codified through a CNN. Finally, the attributes and CNN outputs are joined together and the distance between the features of the pair are compared to the sales distances”, that for “the energy function, there are also different possibilities. The simplest approach is to directly use a summarizing function to condensate the couple of feature vectors in the prediction. Examples of suitable functions are the L2 norm of the difference of the vectors, the cosine distance or, more generally, any distance function defined on the features space” and that the “main concept of the proposed process is based on the best practices, used in fashion companies, which consists of comparing the design, the style, the visual appearance and other technical attributes of new products to those of historical ones in order to perform sales forecasting.” The Examiner asserts that, as shown herein above and in the cited portions, Craparotta et al. disclose that their model(s) can operate at the pixel level, that their model(s) is applied on 200x200 images, that their model determines the distance, i.e., a similarity score, between features of an input data pair and that features of the input data pair include images of items of the input data pair. The Examiner asserts that the distance between the features of the pair determined by the model of Craparotta et al. corresponds to the claimed similarity score between the target object image and one or more known object images. Furthermore, the Examiner asserts that at least figures 2, 4, 7 and 11 of Craparotta et al. 
illustrate that a target object image is compared to a group of known object images. Thus, the Examiner asserts that Craparotta et al. disclose the aforementioned disputed claim limitation at least because Craparotta et al. disclose utilizing a Siamese Neural Network (SNN) that compares pairs of items, input data, composed of attributes and images composed of digital pixel data to determine the distance, i.e., a similarity score, between the pairs of items. In addition, the Examiner asserts that the broadest reasonable interpretation of the aforementioned disputed claim limitation does not require that the similarity score is generated based solely on comparing the digital pixel data of the target object image to the digital pixel data from the one or more known object images. Therefore, the Examiner asserts that at least Craparotta et al. disclose the aforementioned disputed claim limitation. On pages 14 - 18 of the remarks the Applicant’s Representative argues that Craparotta et al. fail to teach or suggest “identifying, by the machine learning model, a set of similar object images based at least in part on the similarity score of the one or more known object images”. The Applicant’s Representative argues that Craparotta et al. fail to teach or suggest the aforementioned disputed claim limitation at least because the process described by Craparotta et al. “identifies historical products with similar sales profiles, not similar images.” The Examiner respectfully disagrees. The Examiner asserts that, at least, Craparotta et al. disclose the aforementioned disputed claim limitation, see at least page 1538 figure 2, page 1541 left-hand column first-full paragraph - section 3.1.3, page 1541 figure 7, page 1542 section 4.1 - page 1543 section 4.2 paragraph 1, page 1543 figure 11, pages 1544 - 1545 section 5 and page 1545 figure 16 of Craparotta et al. 
wherein they disclose that the “aim of the SNN is to model the relation between distance di1,i2 and the similarity in the feature space (input data)”, that “input data are pairs of historical items composed by the attributes and the image of the items. Every image is then codified through a CNN. Finally, the attributes and CNN outputs are joined together and the distance between the features of the pair are compared to the sales distances”, that for “the energy function, there are also different possibilities. The simplest approach is to directly use a summarizing function to condensate the couple of feature vectors in the prediction. Examples of suitable functions are the L2 norm of the difference of the vectors, the cosine distance or, more generally, any distance function defined on the features space”, that at “the end of this cross validation step, f SNNs have been trained and a prediction of the distance d̃i,j between for all the pairs of items i and j of I is given” and that from “these predicted distances, the selection of the best number n of nearest products is then performed.” The Examiner asserts that, as shown herein above and in the cited portions, Craparotta et al. disclose that their model determines the distance, i.e., a similarity score, between features of an input data pair, that features of the input data pair include images of items of the input data pair and, based on the distances, i.e., similarity scores, between an input item and each of a set of historical items predicted by their model, that a number of nearest products are selected. The Examiner asserts that the distance between the features of the input data pair determined by the model of Craparotta et al. corresponds to the claimed similarity score of the one or more known object images and that historical data of the historical items of Craparotta et al. comprises images of the historical items. 
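The selection step quoted above (an energy function such as the L2 norm of the difference of two feature vectors, followed by choosing the n items with the lowest predicted distance) can be sketched as follows. This is an illustrative reconstruction only: the feature vectors, item names, and helper functions are hypothetical, not taken from Craparotta et al. or the claims.

```python
import math

def l2_distance(a, b):
    # Energy function: L2 norm of the difference of two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def n_nearest(target_features, known_items, n):
    # Rank known items by distance to the target's features and keep
    # the n items with the lowest (i.e., most similar) distance
    ranked = sorted(known_items,
                    key=lambda item: l2_distance(target_features, known_items[item]))
    return ranked[:n]

# Hypothetical CNN-style feature vectors for a target item and three known items
target = [0.9, 0.1, 0.4]
known = {
    "item_a": [0.8, 0.2, 0.5],
    "item_b": [0.1, 0.9, 0.9],
    "item_c": [0.85, 0.15, 0.4],
}
print(n_nearest(target, known, 2))  # → ['item_c', 'item_a']
```

The cosine distance mentioned in the reference could be substituted for `l2_distance` without changing the selection logic.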
Furthermore, the Examiner asserts that at least figures 2, 7 and 11 of Craparotta et al. illustrate that a target object image is compared to a group of known object images and that the nearest historical products to a new product comprise images of the historical products. Thus, the Examiner asserts that Craparotta et al. disclose the aforementioned disputed claim limitation at least because Craparotta et al. disclose selecting the n nearest historical products to a new product based on distances, i.e., similarity scores, between the new product and each of a set of historical products predicted by their model and because Craparotta et al. disclose that their historical products are associated with respective images of the historical products. Therefore, the Examiner asserts that at least Craparotta et al. disclose the aforementioned disputed claim limitation. On pages 16 - 18 of the remarks the Applicant’s Representative argues that Craparotta et al. fail to teach or suggest “for each similar object image of the set of similar object images, retrieving object attributes including historical event data associated with the similar object image”. The Applicant’s Representative argues that Craparotta et al. fail to teach or suggest the aforementioned disputed claim limitation at least because Craparotta et al. do “not describe a process in which the model retrieves attributes tied to each similar image for subsequent use.” The Applicant’s Representative argues that there is no disclosure in Craparotta et al. that “after identifying ‘nearest products,’” their “system retrieves their attributes or any historical event data.” The Examiner respectfully disagrees. The Examiner asserts that, at least, Craparotta et al. 
disclose the aforementioned disputed claim limitation, see at least page 1538 right-hand column first-full paragraph, page 1541 section 3.1.3, page 1542 section 4.1 - page 1543 section 4.2 paragraph 1, page 1543 figure 11 and pages 1544 - 1545 section 5 of Craparotta et al. wherein they disclose that from “the above predicted distances d̃i,j between a pair of items i and j of I, the sales profile forecast can be performed for each item in I. The sales profile forecast for an item i ∈ If is defined as the average sales profile of items j in I \ If for which the predicted distance {d̃i,j}j is low. Therefore, the number n of nearest profiles to consider has to be determined. We define Lni as the set of items having the n lowest predicted distance from the sales profile of item i” and that from “these predicted distances, the selection of the best number n of nearest products is then performed. Following the process described in Section 3.1, the profile forecasts are computed for n = 4 to 60.” The Examiner asserts that, as shown herein above and in the cited portions, Craparotta et al. disclose that sales profiles, i.e., object attributes including historical event data, are retrieved for each of the n nearest products to a new product, i.e., for each similar object image. Furthermore, the Examiner asserts that at least figure 11 of Craparotta et al. illustrates that profiles, i.e., object attributes including historical event data, are retrieved for each of a set of similar object images. Therefore, the Examiner asserts that at least Craparotta et al. disclose the aforementioned disputed claim limitation. On pages 16 - 18 of the remarks the Applicant’s Representative argues that Craparotta et al. 
fail to teach or suggest “generating a predicted characteristic model including a predicted characteristic for the target object represented in the target object image based at least on the historical event data for ach [sic] similar object image in the set of similar object images combined with the similarity score associated with each of the given similar object images”. The Applicant’s Representative argues that Craparotta et al. fail to teach or suggest the aforementioned disputed claim limitation at least because in Craparotta et al. the “sales profile forecast for a new item is computed simply as the average of the normalized sales profiles of the n nearest historical products”, no “’event data’ is retrieved—only pre-cleaned sales profiles are averaged” and no “weighted contribution based on a similarity score is used”. The Examiner respectfully disagrees. Initially, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “weighted contribution based on a similarity score”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the Examiner asserts that, at least, Craparotta et al. disclose the aforementioned disputed claim limitation, see at least page 1538 right-hand column first-full paragraph, page 1541 left-hand column first-full paragraph - section 3.1.3, page 1542 section 4.1 - page 1543 section 4.2 p
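The averaging step disputed in this passage (computing the forecast for a new item as the average of the normalized sales profiles of its n nearest historical items) can be sketched as follows; the profile values and item names are invented for illustration and are not drawn from Craparotta et al.

```python
def forecast_profile(nearest_ids, profiles):
    # Average the normalized weekly sales profiles of the selected
    # nearest historical items, week by week, to form the forecast
    selected = [profiles[item] for item in nearest_ids]
    weeks = len(selected[0])
    return [sum(p[w] for p in selected) / len(selected) for w in range(weeks)]

# Hypothetical normalized weekly sales profiles for the two nearest items
profiles = {
    "item_a": [0.5, 0.25, 0.125, 0.125],
    "item_c": [0.25, 0.25, 0.25, 0.25],
}
print(forecast_profile(["item_a", "item_c"], profiles))  # → [0.375, 0.25, 0.1875, 0.1875]
```

A similarity-weighted average, which the Applicant notes is absent from the reference, would replace the uniform `1/len(selected)` weighting with per-item weights; the unweighted version above matches what the reference is described as disclosing.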

Prosecution Timeline

May 20, 2022
Application Filed
Apr 18, 2025
Non-Final Rejection — §101, §103, §112
Aug 25, 2025
Response Filed
Nov 26, 2025
Final Rejection — §101, §103, §112
Apr 01, 2026
Request for Continued Examination
Apr 02, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
2y 5m to grant · Granted Mar 24, 2026
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
2y 5m to grant · Granted Feb 10, 2026
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
2y 5m to grant · Granted Feb 10, 2026
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
2y 5m to grant · Granted Feb 03, 2026
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
