Prosecution Insights
Last updated: April 19, 2026
Application No. 18/611,886

DIGITAL IMAGE VISUAL AESTHETIC SCORE GENERATION

Non-Final Office Action: rejections under §101, §102, §103, §112, and nonstatutory double patenting
Filed: Mar 21, 2024
Examiner: ORANGE, DAVID BENJAMIN
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)
Grant Probability: 34% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability with Interview: 63%

Examiner Intelligence

Career Allow Rate: 34% (51 granted / 151 resolved); -28.2% vs Tech Center average
Interview Lift: +29.4% (strong), measured over resolved cases with an interview
Typical Timeline: 3y 7m average prosecution
Currently Pending: 51 applications
Career History: 202 total applications across all art units

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 32.0% (-8.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 151 resolved cases.
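The figures above are simple ratios, and the "vs TC avg" deltas imply an estimated Tech Center average for each statute. A minimal sketch of the arithmetic, using only the numbers shown on this page (what the per-statute percentages measure, likely the examiner's allowance rate on cases with that rejection, is our reading, and the variable names are ours):

```python
# Reproduce the headline examiner statistics from the figures shown above.
granted, resolved = 51, 151
career_allow_rate = granted / resolved          # ~0.338, displayed as 34%

# Per-statute rates and their gap vs the Tech Center average estimate
# (negative delta = examiner is below the TC average).
statute_rate = {"101": 0.131, "103": 0.290, "102": 0.202, "112": 0.320}
vs_tc_avg    = {"101": -0.269, "103": -0.110, "102": -0.198, "112": -0.080}

# Implied TC average estimate for each statute: examiner rate minus delta.
tc_avg_estimate = {s: statute_rate[s] - vs_tc_avg[s] for s in statute_rate}

print(f"career allow rate: {career_allow_rate:.1%}")
for s, avg in tc_avg_estimate.items():
    print(f"§{s}: examiner {statute_rate[s]:.1%} vs TC avg estimate {avg:.1%}")
```

Note that 51/151 is 33.8%, which the dashboard rounds to 34%; the implied TC average works out to roughly 40% for each statute, consistent with the single "Tech Center average estimate" line the chart described.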

Office Action

Rejections under §101, §102, §103, §112, and double patenting.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

The below claims are objected to because of the following informalities: Claims 3-9 and 11-19 refer to their parent claims as “describing” rather than claiming (i.e., the patent claims are not descriptions). Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159.
See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 (all claims) are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of each of U.S. Patent Nos. US 10489688 B2, US 10515443 B2, US 11069030 B2, US 11532036 B2, and US 12211129 B2 in view of the prior art as applied below. Both the pending claims and the conflicting patents are all directed to aesthetic scores of images. Therefore, all of the conflicting patents are directed to the same problem as the present application.
Further, any differences between the present claims and the claims in any of the conflicting patents are obvious in view of the prior art as applied below. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the below prior art with any of the conflicting patents for implementation details (especially as the patent claims lack implementation details). Based on the findings herein, this is an example of “(A) Combining prior art elements according to known methods to yield predictable results.” MPEP 2143.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “processing device” in claims 1, 10, and 20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 (all claims) are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1-20 are rejected as a formality because the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph language in these claims does not have sufficient structure in the specification. This rejection matches the below indefiniteness rejection for the same language. Once that rejection is overcome, this one will be as well.
Claim 1 recites “generating, by the processing device, an aesthetic score of the input digital image using a machine-learning model,” but this is unlimited functional claiming because of the wide variety of different architectures and ways that this could be performed. MPEP 2173.05(g). Claims 10 and 20 recite corresponding training language, and this is similarly rejected. Dependent claims are likewise rejected.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim limitation “processing device” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Here, the specification does not identify which hardware elements are the “processing device,” but rather shows box 704 on Fig. 7. This is insufficient to “clearly link or associate” the structure to the function. MPEP 2181(II)(C) and (III). Therefore, the claims are indefinite and rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Claim 1 recites “receiving … an input,” but it is not clear if “input” is a distinct requirement from the receiving. For example, is “input” intended to require that it was entered by a user?

Claim 1 recites “the machine-learning model trained using training digital images and user interaction data describing user interaction with the training digital images.” It is not clear if this is intended as a required method step or if this is a product-by-process claim element. If product-by-process, it is unclear what structure is implied.

Claims 1, 10, and 20 recite “aesthetic score,” but this is new terminology. MPEP 2173.05(a). In particular, claim 2 recites “aesthetic score is configured to specify an amount” and claims 10 and 20 recite the same language, but without the “is.” Whether something is “configured to specify an amount” is subjective (one person might think that the color green is good, and someone else may not assign it meaning). MPEP 2173.05(b)(IV). Additionally, it is unclear if “aesthetic score” is intended to be specific to the training data, or if it applies generally.

Claims 1, 10, and 20 recite “describing,” but this is subjective because different people can have different opinions as to what is meant. MPEP 2173.05(b)(IV).

Claims 1, 3, 4, 9, 10, and 18-20 recite “respectively,” but it is not clear how this term is meant. https://www.dictionary.com/browse/respectively defines “respectively,” adverb, as:

(1) (of two or more things) referring or applying to two or more things previously mentioned, in a parallel or sequential way (“Joe and Bob escorted Betty and Alice, respectively.”); or
(2) in precisely the order given; sequentially.

Here, none of the uses comport with the first meaning because none of the uses provide parallel lists. From context, it does not appear that the intent of “respectively” is the second meaning.
Claim 4 recites “training the machine-learning model,” but it is not clear if the antecedent basis is intended to mean the machine-learning model that has already been trained (i.e., fine-tuning), or if it includes the initial training of the model.

Claims 6, 12, and 16 recite “aesthetics learning,” but this is new terminology. MPEP 2173.05(g). In particular, is this limited to the claimed bucketing of classifications?

Claims 9 and 18 recite “associated,” but this is subjective because it lacks an objective relationship. MPEP 2173.05(b)(IV).

Claim 10 recites “implemented by a processing device,” but it is not clear what this relationship is. For example, is the intent that the processing device generated the underlying code, that the code is stored on the processing device, or that the processing device is presently executing the code (which would be an impermissible method step in an apparatus claim)?

Claim 10 recites “a training data collection module,” but this is new terminology. MPEP 2173.05(a). Additionally, the module is claimed with “to,” which is interpreted as an intended use, with the result that there is no clarity on either what the module is or what it does. Further, it is unclear what it means to be a module that collects training data.

Claim 10 also recites “a training module configured to,” but this module is also new terminology. MPEP 2173.05(a). Additionally, in light of the recent In re Blue Buffalo (Fed. Cir. January 14, 2026, non-precedential, slip opinion retrieved from https://www.cafc.uscourts.gov/opinions-orders/24-1611.OPINION.1-14-2026_2632686.pdf), it is unclear if this language is interpreted as limiting the structure, or if it instead means “capable of.” Claims 11, 12, and 14-18 recite that various other modules are “configured” in various ways, and this raises the same issue. Further, each of the claimed modules is new terminology. MPEP 2173.05(a).
Claim 10 recites “a training module configured to train a machine-learning model,” but this is subjective because, since the actual model is not required by the claim, different people can have different opinions about whether a given module could train an unspecified model.

Claim 11 recites “learning signal extraction module,” but this is new terminology. MPEP 2173.05(a).

Claim 14 recites a “machine-learning system,” but this is new terminology. MPEP 2173.05(a).

Claim 20 recites “an amount of visual aesthetics exhibited by the input digital image,” but this is subjective. MPEP 2173.05(b)(IV).

Claim 20 recites “collecting, by a processing device, training data,” but it is unclear what this means. Is the intent that the processing device merely loads data, that it generates the data, or that it facilitates the user interactions?

Dependent claims are likewise rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 10-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because independent claim 10’s “training module” is software per se. Additionally, as per the 112 rejection, it is unclear if “implemented by a processing device” necessitates hardware. Dependent claims are likewise rejected.

Claims 1-20 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Step 1: Claims 1 and 20 (and their dependents) recite a method, and processes satisfy Step 1 of the eligibility test.
Claim 10 (and its dependents) is addressed in the above rejection.

Step 2A, prong one: All of the elements of claims 1-20 are a mental process because a person can look at an image and assign an aesthetic score. Further, the various models are also mental processes; see example 47, claim 2, element (d) (from the July 2024 AI subject matter eligibility examples). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or performance in a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 U.S. 63 (1972): “The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea.” In Benson, the Supreme Court did not separately analyze the computer hardware at issue; the specifics of what hardware was claimed are included only in an appendix to the decision.

Because there are no additional elements, no further analysis is required for Step 2A, prong two, or Step 2B.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8, 10-16, 19, and 20 (all claims except those rejected under 103 below) are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as anticipated by US20180039879A1 (“Shaji”).

1. A method comprising:

receiving, by a processing device, an input digital image;
(Shaji, claim 1, “receiving an image for scoring;”)

generating, by the processing device, an aesthetic score of the input digital image using a machine-learning model, the machine-learning model trained using training digital images and user interaction data describing user interaction with the training digital images, respectively; and
(Shaji, claim 1, “applying the machine-learned model to the received image thereby assigning an aesthetic score to the received image, wherein the learned features are inputs to the machine-learned model.” See also claim 9, detailing the user interaction.)

outputting, by the processing device, the aesthetic score.
(Shaji, [0081] “a ranking may be generated, and displayed in display 602, that shows a user the best rated images.” Shaji’s ranking teaches the claimed aesthetic score.)

2. The method of claim 1, wherein the aesthetic score is configured to specify an amount of visual aesthetics exhibited by the input digital image.
(Shaji, claim 1, “wherein a more aesthetically-pleasing image is given a higher aesthetic score and a less aesthetically-pleasing image is given a lower aesthetic score;”)

3. The method as described in claim 1, wherein the user interaction data describes, respectively, a number of appreciations of the training digital images and a number of views of the training digital images.
(Shaji, [0068] “As another example, training images may come from known information about a user (or segment), such as, for example, recently visited or most visited images, “likes,” or other data that may be collected for a given user (or segment) indicative of a user's (or segment's) aesthetic preference.” See also claim 9, detailing an additional user interaction.)

4. The method as described in claim 1, further comprising training the machine-learning model using training data including the training digital images and the user interaction data describing user interaction with the training digital images, respectively.
(Shaji, claim 1, “updating the base neural network to generate a personalized neural network based on the received second set of training images … executing the personalized neural network on the received image to generate learned features.”)

5. The method as described in claim 4, wherein the training includes generating aesthetics classification labels as a learning signal based on the training data.
(Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”)

6. The method as described in claim 5, wherein the generating aesthetics classification labels includes:

generating a learning signal based on the training data; and
(Shaji, claim 1, “updating the base neural network to generate a personalized neural network based on the received second set of training images … executing the personalized neural network on the received image to generate learned features.”)

generating the aesthetics classification labels through aesthetics learning as a classification of the learning signal into respective buckets.
(Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”)

7. The method as described in claim 4, wherein the training includes generating candidate aesthetics scores and confidence estimates of the candidate aesthetics scores.
(Shaji, [0081] “The user may also receive a confidence score, which indicates the degree of certainty based on the current ranking.” Shaji’s ranking teaches the claimed aesthetics score.)

8. The method as described in claim 7, wherein the generating the candidate aesthetics scores and the confidence estimates of the candidate aesthetics scores includes:

generating aesthetics classifications using a classifier; and
(Shaji, [0033] “In some embodiments, a personalization layer can receive as input, the output of a multi-label, multi-class classifier”)

generating the candidate aesthetics scores and the confidence estimates based on the aesthetics classifications.
(Shaji, Fig. 5.)

10. A system comprising:

a training data collection module implemented by a processing device to collect training data including training digital images and user interaction data describing user interaction with the training digital images, respectively; and
(Shaji, claim 1, “updating the base neural network to generate a personalized neural network based on the received second set of training images … executing the personalized neural network on the received image to generate learned features.”)

a training module configured to train a machine-learning model using the training data to generate an aesthetic score based on an input digital image, the aesthetic score configured to specify an amount of visual aesthetics exhibited by the input digital image.
(Shaji, claim 1, “wherein a more aesthetically-pleasing image is given a higher aesthetic score and a less aesthetically-pleasing image is given a lower aesthetic score;”)

11. The system as described in claim 10, wherein the training module includes a learning signal extraction module that is configured to generate aesthetics classification labels as a learning signal based on the training data.
(Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”)

12. The system as described in claim 11, wherein the learning signal extraction module includes:

a learning signal computation module configured to generate a learning signal based on the training data; and
(Shaji, claim 1, “updating the base neural network to generate a personalized neural network based on the received second set of training images … executing the personalized neural network on the received image to generate learned features.”)

a discretization module configured to generate the aesthetics classification labels through aesthetics learning as a classification of the learning signal into respective buckets.
(Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”)

13. The system as described in claim 12, wherein the learning signal is based on a number of appreciations of the training digital images and a number of views of the training digital images.
(Shaji, [0068] “As another example, training images may come from known information about a user (or segment), such as, for example, recently visited or most visited images, “likes,” or other data that may be collected for a given user (or segment) indicative of a user's (or segment's) aesthetic preference.” See also claim 9, detailing an additional user interaction.)

14. The system as described in claim 10, wherein the training module includes an aesthetic classification module that is configured to generate candidate aesthetics scores and confidence estimates of the candidate aesthetics scores.
(Shaji, [0081] “The user may also receive a confidence score, which indicates the degree of certainty based on the current ranking.” Shaji’s ranking teaches the claimed aesthetics score.)

15. The system as described in claim 14, wherein the aesthetic classification module includes:

a machine-learning system configured to generate aesthetics classifications using a classifier; and
(Shaji, [0033] “In some embodiments, a personalization layer can receive as input, the output of a multi-label, multi-class classifier”)

a calculation module configured to generate the candidate aesthetics scores and the confidence estimates based on the aesthetics classifications.
(Shaji, Fig. 5.)

16. The system as described in claim 15, wherein the machine-learning system is configured to generate the aesthetics classifications based on aesthetics classification labels generated through aesthetics learning as a classification of a learning signal into respective buckets based on the training data.
(Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”)

19.
The system as described in claim 10, wherein the user interaction data describes relative amounts of user interaction with the training digital images, respectively.
(Shaji, [0068] “As another example, training images may come from known information about a user (or segment), such as, for example, recently visited or most visited images, “likes,” or other data that may be collected for a given user (or segment) indicative of a user's (or segment's) aesthetic preference.” See also claim 9, detailing an additional user interaction.)

20. A method comprising:

collecting, by a processing device, training data including training digital images and user interaction data describing user interaction with the training digital images, respectively; and
(Shaji, claim 1, “updating the base neural network to generate a personalized neural network based on the received second set of training images … executing the personalized neural network on the received image to generate learned features.”)

training, by the processing device, a machine-learning model using the training data to generate an aesthetic score based on an input digital image, the aesthetic score configured to specify an amount of visual aesthetics exhibited by the input digital image.
(Shaji, claim 1, “wherein a more aesthetically-pleasing image is given a higher aesthetic score and a less aesthetically-pleasing image is given a lower aesthetic score;”)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 9, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over US20180039879A1 (“Shaji”) in view of “Cross-validation (statistics),” March 8, 2024, Wikipedia, retrieved from https://en.wikipedia.org/w/index.php?title=Cross-validation_(statistics)&oldid=1212658624 (“Wikipedia”) 9. The method as described in claim 4, wherein the training includes generating training aesthetic scores using confidence-filtered and cross-validated model predictions by: outputting candidate aesthetic scores and confidence estimates for the training images (Shaji, [0081] “The user may also receive a confidence score, which indicates the degree of certainty based on the current ranking.” Shaji’s ranking teaches the claimed aesthetics score.) 
generating filtered scores by filtering the candidate aesthetic scores based on the confidence estimates; (Shaji, [0081] “In some embodiments, a ranking may be generated, and displayed in display 602, that shows a user the best rated images.”) assigning aesthetics classification labels by discretizing the filtered scores into a plurality of classes associated, respectively, with a plurality of buckets; and (Shaji, claim 9, “receiving input from the user indicating one or more of: (i) that the user prefers an image of the one or more candidate images; (ii) that the user dislikes an image of the one or more candidate images; and (iii) that the user prefers one image over another image of the one or more candidate images.”) training the machine-learning model based on aesthetic scores and confidence estimates generated based on the aesthetics classification labels. (Shaji, claim 9. Claim 9 recites that the above training data is used to train.) Shaji is not relied on for the below claimed language. However, Wikipedia teaches training images that are generated using cross-validation (Wikipedia, “In a prediction problem, a model is usually given a dataset of known data on which training is run (training dataset), and a dataset of unknown data (or first seen data) against which the model is tested (called the validation dataset or testing set).” See also the “Applications” section, specifying use with images.) 
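For readers unfamiliar with the technique the rejection turns on, the claim 9 pipeline recited above (candidate scores with confidence estimates, confidence filtering, discretization into class buckets, and cross-validation splitting) can be sketched in a few lines. This is purely an illustrative sketch: the function names, the 0.5 threshold, and the toy `model` interface are hypothetical and appear in neither Shaji nor the application.

```python
import random

def filter_by_confidence(predictions, threshold):
    # "generating filtered scores by filtering the candidate aesthetic
    # scores based on the confidence estimates"
    return [(img, score)
            for img, (score, confidence) in predictions
            if confidence >= threshold]

def discretize(filtered, n_buckets):
    # "discretizing the filtered scores into a plurality of classes
    # associated, respectively, with a plurality of buckets"
    # (scores assumed to lie in [0, 1]; the bucket index becomes the label)
    return [(img, min(int(score * n_buckets), n_buckets - 1))
            for img, score in filtered]

def k_fold_splits(items, k):
    # Plain k-fold cross-validation as described in the cited Wikipedia
    # article: each fold serves once as the held-out validation set.
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, folds[i]

# Toy demonstration with a stand-in "model" that emits a
# (candidate_score, confidence_estimate) pair per image.
random.seed(0)
images = [f"img_{i}" for i in range(10)]
model = lambda img: (random.random(), random.random())
predictions = [(img, model(img)) for img in images]
labeled = discretize(filter_by_confidence(predictions, threshold=0.5),
                     n_buckets=4)
for train_set, val_set in k_fold_splits(labeled, k=2):
    assert len(train_set) + len(val_set) == len(labeled)
```

Note that whether this generic pipeline reads onto Shaji's user-feedback ranking is exactly what is disputed in a §103 response; the sketch only shows what the claim language describes.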
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Wikipedia to the teachings of Shaji such that Shaji’s training images are cross-validated. The motivation is stated by Wikipedia: “The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias[10] and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).” Note that Shaji, claim 9, teaches a known dataset that is used for training. Based on the above, this is an example of “combining prior art elements according to known methods to yield predictable results.” MPEP 2143.

Claims 17 and 18 are rejected for the same reasons as claim 9.

Conclusion

The patents (and their pre-grant publications) cited for double patenting are also considered pertinent to applicant's disclosure, particularly US20190026609A1.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE, whose telephone number is (571) 270-1799. The examiner can normally be reached Mon-Fri, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVID ORANGE/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Mar 21, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567126
INFRASTRUCTURE-SUPPORTED PERCEPTION SYSTEM FOR CONNECTED VEHICLE APPLICATIONS
2y 5m to grant Granted Mar 03, 2026
Patent 11300964
METHOD AND SYSTEM FOR UPDATING OCCUPANCY MAP FOR A ROBOTIC SYSTEM
2y 5m to grant Granted Apr 12, 2022
Patent 10816794
METHOD FOR DESIGNING ILLUMINATION SYSTEM WITH FREEFORM SURFACE
2y 5m to grant Granted Oct 27, 2020
Patent 10433126
METHOD AND APPARATUS FOR SUPPORTING PUBLIC TRANSPORTATION BY USING V2X SERVICES IN A WIRELESS ACCESS SYSTEM
2y 5m to grant Granted Oct 01, 2019
Patent 10285010
ADAPTIVE TRIGGERING OF RTT RANGING FOR ENHANCED POSITION ACCURACY
2y 5m to grant Granted May 07, 2019
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
34%
Grant Probability
63%
With Interview (+29.4%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
