Prosecution Insights
Last updated: April 19, 2026
Application No. 18/420,705

DATA CREATION APPARATUS, STORAGE DEVICE, DATA PROCESSING SYSTEM, DATA CREATION METHOD, PROGRAM, AND IMAGING APPARATUS

Non-Final OA • §101, §103, §112
Filed
Jan 23, 2024
Examiner
WELLS, HEATH E
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Fujifilm Corporation
OA Round
1 (Non-Final)
75%
Grant Probability
Favorable
1-2
OA Rounds
3y 5m
To Grant
93%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
58 granted / 77 resolved
+13.3% vs TC avg
Strong +18% interview lift
+18.1%
Interview Lift
resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
46 currently pending
Career history
123
Total Applications
across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 77 resolved cases
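The cards above are simple ratios over the examiner's 77 resolved cases, and the "With Interview" figure is roughly the career allow rate plus the interview lift. A minimal sketch of that arithmetic follows; the case-record layout is an assumed schema for illustration, and only the figures quoted in comments come from this page.

```python
# Illustrative arithmetic behind the examiner cards above. The case-record
# layout (dicts with "granted"/"interview" keys) is hypothetical; only the
# figures quoted in the comments come from this page.

def allow_rate(cases: list[dict]) -> float:
    """Fraction of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[dict]) -> float:
    """Allow rate for cases with an interview minus the rate without one."""
    with_iv = allow_rate([c for c in cases if c["interview"]])
    without_iv = allow_rate([c for c in cases if not c["interview"]])
    return with_iv - without_iv

career_rate = 58 / 77                  # 0.753 -> the "75%" Career Allow Rate card
tc_average = career_rate - 0.133       # "+13.3% vs TC avg" implies a ~62% TC average
with_interview = career_rate + 0.181   # reported +18.1% lift -> ~0.93, the "93%" card
```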

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT/JP2022/023213. Priority to JP 2021-125785, with a priority date of 30 July 2021, is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.

Information Disclosure Statement

The IDS dated 9 April 2024 has been considered and placed in the application file.

Specification - Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "Automated Addition of Quality Information to Images."

1st Claim Interpretation

Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of claim terms, as understood by one having ordinary skill in the art, dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require that feature or step, does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).

Claims 6-10 recite "or." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.

2nd Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a data creation apparatus that creates" in claim 16; "a learning apparatus that performs" in claim 16; "setting processing of setting" in claim 16; "creation processing of creating" in claim 16; and "learning processing of performing" in claim 16.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).

3rd Claim Interpretation

Claim 14 recites "the image quality information do not satisfy the setting condition." The phrase "do not" is considered to be a negative limitation because the word "not" is exclusionary in nature. According to MPEP § 2173.05(i), "Any negative limitation or exclusionary proviso must have basis in the original disclosure." The specification defines this phrase in paragraph [0141]. As showing a negative is not reasonable, any prior art reference that does not explicitly show the recited limitation suffices to reject the limitation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 12 and 14 are rejected under 35 U.S.C. § 112(b) as being indefinite for claiming both an apparatus and a process of using the apparatus.
When both an apparatus and a method of using the apparatus are claimed in the same claim, it is unclear whether infringement occurs when the apparatus is constructed or when the apparatus is used; the scope of the claim is therefore indefinite. See MPEP 2173.05(p). Claim 12 claims "a data creation apparatus" and then claims "in accordance with designation from a user." Claim 14 claims a "data creation apparatus" and then claims "in a case where the additional image data is selected." Appropriate correction is required.

1st Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention does not fit into one of the four categories of subject matter Congress deemed to be appropriate subject matter for a patent: processes, machines, manufactures, and compositions of matter. "If a claim covers material not found in any of the four statutory categories, that claim falls outside the plainly expressed scope of § 101 even if the subject matter is otherwise new and useful." In re Nuijten, 500 F.3d 1346, 1354, 84 USPQ2d 1495, 1500 (Fed. Cir. 2007).

A machine is a "concrete thing, consisting of parts, or of certain devices and combination of devices." Digitech, 758 F.3d at 1348-49, 111 USPQ2d at 1719 (quoting Burr v. Duryee, 68 U.S. 531, 570, 17 L. Ed. 650, 657 (1863)). This category "includes every mechanical device or combination of mechanical powers and devices to perform some function and produce a certain effect or result." Nuijten, 500 F.3d at 1355, 84 USPQ2d at 1501 (quoting Corning v. Burden, 56 U.S. 252, 267, 14 L. Ed. 683, 690 (1854)). Claims 1 and 19 are not machines even though they claim "A data creation apparatus" or "An imaging apparatus"; further, the claims lack parts or a combination of devices.

As the courts' definitions of machines, manufactures, and compositions of matter indicate, a product must have a physical or tangible form in order to fall within one of these statutory categories. Digitech, 758 F.3d at 1348, 111 USPQ2d at 1719. Thus, the Federal Circuit has held that a product claim to an intangible collection of information, even if created by human effort, does not fall within any statutory category. Digitech, 758 F.3d at 1350, 111 USPQ2d at 1720 (claimed "device profile" comprising two sets of data did not meet any of the categories because it was neither a process nor a tangible product). Similarly, software expressed as code or a set of instructions detached from any medium is an idea without physical embodiment. See Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449, 82 USPQ2d 1400, 1407 (2007); see also Benson, 409 U.S. at 67, 175 USPQ at 675 (an "idea" is not patent eligible).

Dependent claims 2-15 and 20 are also rejected as depending from claim 1 or 19, likewise reciting "a data creation apparatus" or "an imaging apparatus" embodying functional descriptive material without adding sufficient form to qualify within a statutory category.

2nd Claim Rejections - 35 USC § 101

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The limitations, under their broadest reasonable interpretation, cover a mental process using images/drawings (a concept performed in the human mind, including observation, evaluation, judgment, opinion, prediction, etc.). This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that could be considered specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be done mentally, and no additional features in the claims would preclude them from being performed as such.

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using the two-step inquiry, it is clear that claims 1, 16, 17 and 19 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories? Yes. Claim 17 is directed to a method, i.e., a process, and claims 1, 16 and 19 are directed to an apparatus, i.e., a machine.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? Yes: the claims are directed toward a mental process (i.e., an abstract idea).

With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts - mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes - concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

The apparatus in claim 1, for example, comprises a mental process that can be practicably performed in the human mind and is therefore an abstract idea. Claim 1 recites:

setting condition related to identification information and to image quality information with respect to a plurality of pieces of image data… creating the training data based on selection image data in which the identification information and the image quality information satisfying the setting condition are recorded

These limitations, as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the mind or by a human.
The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

As such, a person could view image(s)/drawing(s) and record quality information about the images on a sheet of paper. The mere nominal recitation that the various steps are being executed by a processor (e.g., a processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process. If a claim limitation, under its broadest reasonable interpretation, covers performance of a mental step that could be performed with a simple tool such as pen and paper, then it falls within the "mental processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? No, the claims do not recite additional elements that integrate the judicial exception into a practical application.

With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:

an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:

an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Thus, claims 1-20 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.

Thus, since claims 1, 16, 17 and 19 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1, 16, 17 and 19 are not eligible subject matter under 35 U.S.C. 101. A similar analysis applies to dependent claims 2-15, 18 and 20, which are likewise identified as being directed toward an abstract idea, not reciting additional elements that integrate the judicial exception into a practical application, and not reciting additional elements that amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2021/0114207 A1 (Kawabata et al.).

Claim 1

[Image: Kawabata et al. Fig. 8, showing quality parameters as metadata tags.]

Regarding claim 1, Kawabata et al. teach a data creation apparatus that creates training data used in machine learning from image data ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]) in which accessory information is recorded ("An image processing apparatus and method is provided which receives image data and tags the image data with a first type of tag indicative of elements in the image," paragraph [0004]) in an image in which a plurality of subjects are captured ("the tag may list a calculated probability that an image, a subject within an image, or a part of the subject within an image may be in focus," paragraph [0125], where a subject within an image teaches a plurality of subjects), the data creation apparatus being configured to execute: setting processing of setting any setting condition related to identification information and to image quality information with respect to a plurality of pieces of image data in which the accessory information including a plurality of pieces of the identification information assigned in association with the plurality of subjects and a plurality of pieces of the image quality information assigned in association with the plurality of subjects is recorded ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034]); and creation processing of creating the training data based on selection image data in which the identification information and the image quality information satisfying the setting condition are recorded ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Kawabata et al. explicitly motivate doing so at least in paragraphs [0007], [0029] and [0169], including "While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments," and otherwise motivate experimentation and optimization.
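For orientation, the claim 1 pipeline mapped above reads like a metadata filter over per-subject tags. The following is a minimal sketch of that reading, not the applicant's or Kawabata's implementation; every type, field name, and threshold is a hypothetical stand-in.

```python
# Minimal sketch of claim 1's "setting processing" and "creation processing"
# as characterized in the rejection above. All names and thresholds are
# hypothetical illustrations, not taken from the application or Kawabata.
from dataclasses import dataclass

@dataclass
class SubjectTag:
    identification: str    # identification information, e.g. "person"
    sharpness: float       # image quality information, 0.0 (worst) to 1.0 (best)
    brightness: float      # image quality information

@dataclass
class ImageRecord:
    pixels: bytes
    subjects: list[SubjectTag]  # accessory information: one tag per captured subject

# "Setting processing": a condition over identification and image quality info.
def setting_condition(tag: SubjectTag) -> bool:
    return tag.identification == "person" and 0.4 <= tag.sharpness <= 1.0

# "Creation processing": keep only selection image data whose recorded tags
# satisfy the setting condition, and emit (image, label) training pairs.
def create_training_data(images: list[ImageRecord]) -> list[tuple[bytes, str]]:
    return [
        (img.pixels, tag.identification)
        for img in images
        for tag in img.subjects
        if setting_condition(tag)
    ]
```

Read this way, claims 5 and 7 simply constrain the setting condition to a bounded interval (an upper and a lower limit) on the resolution level, the brightness value, or the S/N value.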
The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of system claim 16, method claim 17 and apparatus claim 19, while noting that the rejection above cites both device and method disclosures. Claims 16, 17 and 19 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 2

Regarding claim 2, Kawabata et al. teach the data creation apparatus according to claim 1, wherein the image quality information is information related to any of resolution of the subject in the image indicated by the image data, brightness of the subject, and noise occurring at a position of the subject ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where exposure is brightness; see "Exposure denotes the image's overall brightness," paragraph [0128]).

Claim 3

Regarding claim 3, Kawabata et al. teach the data creation apparatus according to claim 2, wherein the image quality information is resolution information related to the resolution ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where sharpness teaches resolution), and the resolution information is information determined in accordance with blurriness and shake levels of the subject in the image indicated by the image data ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where blurriness is focus).

Claim 4

Regarding claim 4, Kawabata et al. teach the data creation apparatus according to claim 2, wherein the image quality information is resolution information related to the resolution ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where sharpness teaches resolution), and the resolution information is resolution level information related to a resolution level of the subject in the image indicated by the image data ("the CPU 119 analyzes the image based on image data such as focus data of each pixel, distance data of each pixel," paragraph [0064], where distance data of each pixel is resolution).

Claim 5

Regarding claim 5, Kawabata et al. teach the data creation apparatus according to claim 4, wherein the setting condition is a condition including an upper limit and a lower limit of the resolution level of the subject ("In this case, the user 101 designates the threshold as 0.40, then the CPU 119 converts only contents which has numerical value higher than the threshold 0.40," paragraph [0084], where the threshold teaches an upper and lower limit, and "the CPU 119 analyzes the image based on image data such as focus data of each pixel, distance data of each pixel," paragraph [0064], where distance data of each pixel is resolution).

Claim 6

Regarding claim 6, Kawabata et al. teach the data creation apparatus according to claim 2, wherein the image quality information is information related to the brightness of the subject or information related to the noise occurring at the position of the subject ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where exposure is brightness; see "Exposure denotes the image's overall brightness," paragraph [0128]), the information related to the brightness is a brightness value corresponding to the subject ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034]), and the information related to the noise is an S/N value corresponding to the subject ("For example, the user 101, via the display, can define the correction strength of noise and direction as "strong" and "down" resulting in a strong noise reduction correction being performed. The selection of the strength is not only limited to the selection of three levels, but a selection of number defining the strength may also be input by the user," paragraph [0106], where a number defining the strength of noise is a signal-to-noise (S/N) value).

Claim 7

Regarding claim 7, Kawabata et al. teach the data creation apparatus according to claim 6, wherein the setting condition is a condition including an upper limit and a lower limit of the brightness value or an upper limit and a lower limit of the S/N value corresponding to the subject ("In this case, the user 101 designates the threshold as 0.40, then the CPU 119 converts only contents which has numerical value higher than the threshold 0.40," paragraph [0084], where the threshold teaches an upper and lower limit, and "identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034], where exposure is brightness; see "Exposure denotes the image's overall brightness," paragraph [0128]).

Claim 8

Regarding claim 8, Kawabata et al. teach the data creation apparatus according to claim 1, wherein the accessory information further includes a plurality of pieces of positional information assigned in association with the plurality of subjects ("To identify a composition, estimates can be made using a method such as semantic segmentation which separates an object into regions, and calculating the position and center point of each object," paragraph [0139]), and the positional information is information indicating a position of the subject in the image indicated by the image data ("To identify a composition, estimates can be made using a method such as semantic segmentation which separates an object into regions, and calculating the position and center point of each object," paragraph [0139]).

Claim 9

Regarding claim 9, Kawabata et al. teach the data creation apparatus according to claim 1, which is configured to further execute: display processing of displaying an image indicated by the selection image data or a sample image having image quality satisfying the setting condition before executing the creation processing ("FIG. 17A-17C illustrate other embodiments for displaying the suggested tag list in region 1608," paragraph [0165]).

Claim 10

Regarding claim 10, Kawabata et al. teach the data creation apparatus according to claim 9, wherein two or more pieces of the selection image data are selected from the plurality of pieces of image data ("In S1509, the CPU 119 aggregates all tags determined in S1505 and S1508 and determines tags to display as a final results with confidence values. In one exemplary output, the aggregation can take the output individually from S1503-S1505 which indicates that suggested tags for the input image is "A, B, C, D, E" and S1506-S1508 which indicates that suggested tags for the input image is "F, A, G, E, I". The aggregation may then compare common tag values and aggregate the suggested tags as "A, E" because both of these appeared in the individual outputs discussed above," paragraph [0159], and "Selection of at least one image causes the selected image to be displayed in region 1606," paragraph [0164]), and in the display processing, an image of a part of the selection image data among the two or more pieces of the selection image data is displayed ("FIG. 17A-17C illustrate other embodiments for displaying the suggested tag list in region 1608," paragraph [0165]).

Claim 11

Regarding claim 11, Kawabata et al. teach the data creation apparatus according to claim 10, wherein in the display processing, images of the selected pieces of the selection image data are displayed based on a priority level set for each selection image data ("From there, a list of suggested tags to be associated with this image along with the confidence score that the suggested tag is correct is displayed," paragraph [0164], where a confidence score is a priority level).

Claim 12

Regarding claim 12, Kawabata et al. teach the data creation apparatus according to claim 1, which is configured to further execute: determination processing of determining a purpose of the machine learning in accordance with designation from a user ("For example, the user 101, via the display, can define the correction strength of noise and direction as "strong" and "down" resulting in a strong noise reduction correction being performed. The selection of the strength is not only limited to the selection of three levels, but a selection of number defining the strength may also be input by the user," paragraph [0106], where the purpose is noise reduction), wherein in the setting processing, the setting condition corresponding to the purpose is set ("Then, the correction is performed from the Tn=1," paragraph [0107]).

Claim 13

Regarding claim 13, Kawabata et al. teach the data creation apparatus according to claim 1, which is configured to further execute: determination processing of determining a purpose of the machine learning in accordance with designation from a user ("For example, the user 101, via the display, can define the correction strength of noise and direction as "strong" and "down" resulting in a strong noise reduction correction being performed. The selection of the strength is not only limited to the selection of three levels, but a selection of number defining the strength may also be input by the user," paragraph [0106], where the purpose is noise reduction), wherein in the setting processing, the setting condition corresponding to the purpose is suggested to the user before setting the setting condition ("From there, a list of suggested tags to be associated with this image along with the confidence score that the suggested tag is correct is displayed," paragraph [0164]).

Claim 14

Regarding claim 14, Kawabata et al. teach the data creation apparatus according to claim 1, which is configured to further execute: suggestion processing of suggesting an additional condition different from the setting condition to a user ("From there, a list of suggested tags to be associated with this image along with the confidence score that the suggested tag is correct is displayed," paragraph [0164]), wherein the additional condition is a condition set with respect to the accessory information ("From there, a list of suggested tags to be associated with this image along with the confidence score that the suggested tag is correct is displayed," paragraph [0164]), additional image data is selected under the additional condition from non-selection image data of which the identification information and the image quality information do not satisfy the setting condition ("To associate the tags with the image in region 1606, user may select tagging icon 1610 which causes association processing to associate the tag with the image," paragraph [0164]), and in a case where the additional image data is selected, the training data is created in the creation processing based on the selection image data and on the additional image data ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]).

Claim 15

Regarding claim 15, Kawabata et al. teach a storage device that stores the plurality of pieces of image data to be used for creating the training data via the data creation apparatus according to claim 1 ("The recording medium 104 may be a memory card for storing captured image data and may be considered storage device," paragraph [0035]).
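Claims 12-14 extend the same pipeline: a purpose designated by the user determines which setting condition is applied (claims 12-13), and an additional, different condition can recover extra images from the non-selection pool (claim 14). The sketch below continues the hypothetical example given for claim 1 above, reusing its ImageRecord/SubjectTag stand-ins; the purpose names and thresholds are likewise invented.

```python
# Hypothetical continuation of the claim 1 sketch (claims 12-14). Purpose
# names, conditions, and thresholds are invented for illustration only.

PURPOSE_CONDITIONS = {
    # Determination processing (claims 12-13): the user's designated purpose
    # selects the setting condition to apply (or to suggest to the user).
    "subject_recognition": lambda t: t.identification == "person" and t.sharpness >= 0.6,
    "low_light_denoising": lambda t: t.brightness <= 0.3,
}

def split_by_condition(images, condition):
    """Partition images into selection and non-selection image data."""
    selected, rejected = [], []
    for img in images:
        (selected if any(condition(t) for t in img.subjects) else rejected).append(img)
    return selected, rejected

def create_for_purpose(images, purpose, additional_condition=None):
    condition = PURPOSE_CONDITIONS[purpose]
    selection, non_selection = split_by_condition(images, condition)
    # Claim 14: an additional, typically looser condition applied only to the
    # non-selection pool recovers additional image data for the training set.
    if additional_condition is not None:
        additional, _ = split_by_condition(non_selection, additional_condition)
        selection = selection + additional
    return [(img.pixels, [t.identification for t in img.subjects]) for img in selection]
```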
Claim 16

Regarding claim 16, Kawabata et al. teach a data processing system comprising: a data creation apparatus that creates training data from image data ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]) in which accessory information is recorded ("An image processing apparatus and method is provided which receives image data and tags the image data with a first type of tag indicative of elements in the image," paragraph [0004]) in an image in which a plurality of subjects are captured ("the tag may list a calculated probability that an image, a subject within an image, or a part of the subject within an image may be in focus," paragraph [0125]); and a learning apparatus that performs machine learning using the training data, the data processing system being configured to execute: setting processing of setting any setting condition related to identification information and to image quality information with respect to a plurality of pieces of image data in which the accessory information including a plurality of pieces of the identification information assigned in association with the plurality of subjects and a plurality of pieces of the image quality information assigned in association with the plurality of subjects is recorded ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034]); creation processing of creating the training data based on selection image data in which the identification information and the image quality information satisfying the setting condition are recorded ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]); and learning processing of performing the machine learning using the training data ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]).

Claim 17

Regarding claim 17, Kawabata et al. teach a data creation method of creating training data used in machine learning from image data ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]) in which accessory information is recorded in an image ("An image processing apparatus and method is provided which receives image data and tags the image data with a first type of tag indicative of elements in the image," paragraph [0004]) in which a plurality of subjects are captured ("the tag may list a calculated probability that an image, a subject within an image, or a part of the subject within an image may be in focus," paragraph [0125], where a subject within an image teaches a plurality of subjects), the data creation method comprising: a setting step of setting any setting condition related to identification information and to image quality information with respect to a plurality of pieces of image data in which the accessory information including a plurality of pieces of the identification information assigned in association with the plurality of subjects and a plurality of pieces of the image quality information assigned in association with the plurality of subjects is recorded ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034]); and a creation step of creating the training data based on selection image data in which the identification information and the image quality information satisfying the setting condition are recorded ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]).

Claim 18

Regarding claim 18, Kawabata et al. teach a program causing a computer to function as the data creation apparatus according to claim 1, the program causing the computer to execute each of the setting processing and the creation processing ("FIG. 1A shows a schematic diagram of a system where a user 101 edits, stores and/or organizes image data through an application executing on a local PC 102," paragraph [0034]).

Claim 19

Regarding claim 19, Kawabata et al. teach an imaging apparatus ("level of focus and the corresponding image will be used as training datasets to generate the model used by the image recognition engine discussed above," paragraph [0125]) that executes: imaging processing of capturing an image in which a plurality of subjects are captured ("the tag may list a calculated probability that an image, a subject within an image, or a part of the subject within an image may be in focus," paragraph [0125], where a subject within an image teaches a plurality of subjects); and generation processing of generating image data by recording accessory information in the image ("An image processing apparatus and method is provided which receives image data and tags the image data with a first type of tag indicative of elements in the image," paragraph [0004]), wherein the accessory information includes a plurality of pieces of identification information assigned in association with the plurality of subjects and a plurality of pieces of image quality information assigned in association with the plurality of subjects ("identify objects on the image data, but also evaluate quantitative features related to image quality such as sharpness, focus, tilt, noise, exposure, dynamic range, aberration, diffraction distortion and vignetting," paragraph [0034]).

Claim 20

Regarding claim 20, Kawabata et al. teach the imaging apparatus according to claim 19, wherein the accessory information is information for selecting selection image data to be used for creating training data for machine learning ("This assessment is done by the image recognition engine 121a using a trained model trained by machine learning which is built beforehand," paragraph [0083]).

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Patent Publication 2021/0256307 A1 to Papli discloses utilizing video data to provide the necessary number of images and view angles needed to train a machine learning product detection/recognition system to recognize a specific product within later provided images. In various embodiments, a user may provide video data, and the video data may be transformed in a manner that may aid in training of the machine learning system.

Non-patent publication "An easy-to-use image labeling platform for automatic magnetic resonance image quality assessment" to Kustner et al. discloses model observers (MOs), which mimic the human visual system and can help to support the human observers (HOs) during this reading process or can provide feedback to the magnetic resonance (MR) scanner and/or HO about the derived image quality. For this purpose, MOs are trained on HO-derived image labels with respect to a certain diagnostic task. The authors propose a non-reference image quality assessment system based on a machine-learning approach with a deep neural network and active learning to keep the amount of needed labeled training data small. A labeling platform is developed as a web application with accounted data security and confidentiality to facilitate the HO labeling procedure.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 15 January 2026

Prosecution Timeline

Jan 23, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant • Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant • Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
93%
With Interview (+18.1%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
