DETAILED ACTION
Claims 1-20 are pending in this application and have been examined. In accordance with the applicant’s claim for foreign priority, the claims have been accorded the priority date of 07/30/2021.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/17/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“Setting step” of claim 13.
“Selection step” of claim 13.
“Suggestion step” of claim 13.
“Creation step” of claim 13.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because they are drawn to an abstract idea or mental process, without components which meaningfully translate the claims into practical application or significantly more.
Regarding claims 1 and 13, the following claim limitations are directed to an abstract idea, a mental process, or steps of mere data gathering without significantly more:
“A data creation apparatus that creates training data for performing machine learning from a plurality of pieces of image data in which accessory information is recorded, the data creation apparatus comprising: (Step of mere data gathering in which a human could reasonably collect a set of images, and as a mental process determine the accessory information by looking at the image)
a processor, wherein the processor is configured to execute:
setting processing of setting a first condition for selecting first selection image data based on the accessory information from the plurality of pieces of image data; (Mental process in which a human could determine a condition used to select images by looking at the image)
selection processing of selecting the first selection image data in which the accessory information conforming to the first condition is recorded from the plurality of pieces of image data; (Mental process in which the image can be selected and the data can be manually recorded by a human)
suggestion processing of suggesting a second condition for selecting second selection image data based on the accessory information from non-selection image data that does not conform to the first condition among the plurality of pieces of image data; (Mental process in which the image can be selected and the data can be manually recorded by a human)
and creation processing of creating the training data based on the first selection image data in a case where a user has not employed the second condition (Step of mere data gathering in which a human could select a set of images to train a model and determine by looking at the images which conditions apply)
and creating the training data based on the first selection image data and on the second selection image data in a case where the user has employed the second condition. (Step of mere data gathering in which a human could select a set of images to train a model and determine by looking at the images which conditions apply)”.
The above limitations are steps which could practically be performed as a mental process or step of mere data gathering performed by a human under step 2A prong 1 (MPEP 2106). Under step 2A prong 2, the claim recites the additional elements of “A data creation apparatus” and “a processor”, which fail to translate the steps into practical application or amount to significantly more.
Dependent claims 2-12 and 14-20 follow the same logic and do not recite limitations that further translate the claims into practical application or amount to significantly more.
Regarding claim 2, claim 2 recites the limitations; “wherein the processor is configured to, in a case where the user has employed the second condition, execute second selection processing of selecting the second selection image data in which the accessory information conforming to the second condition is recorded from the non-selection image data.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more. The claim recites the additional element of “a processor”, which fails to translate the steps into practical application or amount to significantly more.
Regarding claims 3 and 16, claims 3 and 16 recite the limitations; “wherein the processor is configured to execute the machine learning based on an employment result of whether or not the user has employed the second condition, and in the suggestion processing, the second condition is suggested based on the machine learning of the employment result. (Step of mere data gathering where a human can look at an image, see if the condition is met, and run a program)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more. The claim recites the additional element of “a processor”, which fails to translate the steps into practical application or amount to significantly more.
Regarding claims 4 and 17, claims 4 and 17 recite the limitations; “wherein the processor is configured to execute notification processing of providing notification of information related to the second condition. (Step of mere data gathering where information is gathered and sent via message about a determined condition)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more. The claim recites the additional element of “a processor”, which fails to translate the steps into practical application or amount to significantly more.
Regarding claims 5 and 18, claims 5 and 18 recite the limitations; “wherein the first condition and the second condition include an item related to the accessory information and content related to the item. (Mental process of looking at an image and determining conditions related to observed information)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 6, claim 6 recites the limitations; “wherein the first condition and the second condition have the same item and different content.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 7, claim 7 recites the limitations; “wherein the item is availability information related to use of image data as the training data.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 8, claim 8 recites the limitations; “wherein the availability information includes at least one of user information related to use of the image data, restriction information related to restriction of an aim of use of the image data, or copyright holder information of the image data.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 9, claim 9 recites the limitations; “wherein the content of the first condition is content of selecting image data based on the availability information, and the content of the second condition is content of selecting image data in which the availability information is not recorded or image data in which the availability information indicating that there is no restriction on use of the image data is recorded.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 10, claim 10 recites the limitations; “wherein the item is an item related to a type of a subject captured in an image based on image data.”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claims 11 and 19, claims 11 and 19 recite the limitations; “wherein the first condition is a condition related to a subject captured in an image based on image data, (Mental process of determining a condition based on looking at an image and seeing what the image is of)
and the suggestion processing is processing of suggesting the second condition based on a feature of the subject of the first condition. (Mental process of determining a condition based on looking at an image and seeing what the image is of)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claims 12 and 20, claims 12 and 20 recite the limitations; “wherein the suggestion processing is processing of suggesting the second condition of a higher-level concept obtained by making the first condition more abstract. (Mental process in which a human could generate a suggestion based on a first condition)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more.
Regarding claim 14, claim 14 recites the limitations; “A program causing a computer to execute each processing of the data creation apparatus according to claim 1. (Mere data gathering in which a human can manually execute a program on a computer)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more. The claim recites the additional elements of “a computer”, “a data creation apparatus” and “a program”, which fail to translate the steps into practical application or amount to significantly more.
Regarding claim 15, claim 15 recites the limitations; “A computer readable recording medium on which a program causing a computer to execute each processing of the data creation apparatus according to claim 1 is recorded. (Mere data gathering in which a human can manually execute a program on a computer)”
The above limitations are recited with a high level of generality and are drawn to a mental process or steps of mere data gathering without significantly more. The claim recites the additional elements of “computer readable recording medium”, “a computer” and “a program”, which fail to translate the steps into practical application or amount to significantly more.
Claim 14 is rejected under 35 U.S.C. 101 because it includes the limitation of “program”.
Claim 14 is directed to a program per se. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program’s functionality to be realized. In contrast, a claimed computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program’s functionality to be realized, and is thus statutory. See Lowry, 32 F.3d at 1583-84, 32 USPQ2d at 1035. Since a computer program is merely a set of instructions capable of being executed by a computer, the computer program, per se, is nonstatutory.
Claim 15 is rejected under 35 U.S.C. 101 because it includes the limitation of “computer readable recording medium”.
Claim 15 recites the limitation of “computer readable recording medium”, which is defined in the specification at [0022] and [0145] to include “…the present invention provides a computer readable recording medium on which a program causing a computer to execute each processing of any of the data creation apparatuses. ([0022])” and “…the method according to the embodiment of the present invention can be executed by a program causing a computer to execute each step of the method. In addition, a computer readable recording medium on which the program is recorded can also be provided ([0145])”. The provided definitions do not disavow the claimed computer readable recording medium from including transitory propagating signals per se, since the phrase “in at least one embodiment” implies that the machine-readable medium is not always non-transitory for each and every embodiment disclosed.
The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. The claims, as defined in the specification, cover both non-statutory subject matter and statutory subject matter. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments by adding the limitation "non-transitory" to the claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Haneda (US 20190197359 A1).
Regarding claim 1 Haneda discloses; A data creation apparatus that creates training data for performing machine learning from a plurality of pieces of image data in which accessory information is recorded (Haneda, abstract, the system is a learning device that takes in photographs and generates training data to train a model), the data creation apparatus comprising:
a processor, wherein the processor is configured to execute (Haneda, [0046] the system has a processor which receives and processes the images, as well as executes machine learning and training functions):
setting processing of setting a first condition for selecting first selection image data based on the accessory information from the plurality of pieces of image data (Haneda, [0125] the system has a setting condition section where information such as shooting information, focal length, exposure, and conditions of the accessory, which is analogous to the accessory information described in figure 6 of the applicant’s specification, where the accessory information is defined as information about imaging condition, subject info/object info, image quality, availability, history and purpose);
selection processing of selecting the first selection image data in which the accessory information conforming to the first condition is recorded from the plurality of pieces of image data (Haneda, [0030] the inference engine matches an image’s setting information with a request from a user (first condition), where the setting information contains information about the image analogous to the accessory information);
suggestion processing of suggesting a second condition for selecting second selection image data based on the accessory information from non-selection image data that does not conform to the first condition among the plurality of pieces of image data (Haneda, [0143] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition));
and creation processing of creating the training data based on the first selection image data in a case where a user has not employed the second condition (Haneda, [0048] in a situation where the image data has been determined as “like” or “correct” (meeting the first request or condition), that data is used as training data; if the first condition is satisfied, the user will not have employed the second condition)
and creating the training data based on the first selection image data and on the second selection image data in a case where the user has employed the second condition (Haneda, [0048] images not evaluated as being correct may also be input as training data (first and second conditions) in an event where the second set of images is similar to the first set but not an exact match (the second condition is employed as described in [0142]-[0144] above)).
Regarding claim 2 Haneda discloses; The data creation apparatus according to claim 1, wherein the processor is configured to,
in a case where the user has employed the second condition, execute second selection processing of selecting the second selection image data in which the accessory information conforming to the second condition is recorded from the non-selection image data (Haneda, [0144] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition), additionally, in a situation where the images the user has selected are not likely to be liked by the target audience (first condition not met) a set of images which would likely meet the criteria of being liked will be selected based on the inference information (second condition is used to select images which conform to the condition)).
Regarding claim 3 Haneda discloses; The data creation apparatus according to claim 1, wherein the processor is configured to execute the machine learning based on an employment result of whether or not the user has employed the second condition, and in the suggestion processing, the second condition is suggested based on the machine learning of the employment result (Haneda, [0144] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition), [0146] and [0147] after the camera shooting/image capture advice is suggested (second condition for image capture), it can be determined whether or not the advice has been adopted (second suggestion has been employed) and if the image may be captured, further, as described in [0147] the advice suggested (second condition suggesting) that is not taken into account (employment of the second suggestion) may be taken into consideration by the machine learning system as advice that is of no use to the user, and may not be suggested in the future).
Regarding claim 4 Haneda discloses; The data creation apparatus according to claim 1, wherein the processor is configured to execute notification processing of providing notification of information related to the second condition (Haneda, [0144] the user is displayed with the suggestion/ advice information (second condition suggestion)).
Regarding claim 5 Haneda discloses; The data creation apparatus according to claim 1, wherein the first condition and the second condition include an item related to the accessory information and content related to the item (Haneda, [0125] the system has a setting condition section where information such as shooting information, focal length, exposure, and conditions of the accessory, which is analogous to the accessory information described in figure 6 of the applicant’s specification, where the accessory information is defined as information about imaging condition, subject info/object info, image quality, availability, history and purpose, and [0144] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition)).
Regarding claim 6 Haneda discloses; The data creation apparatus according to claim 5, wherein the first condition and the second condition have the same item and different content (Haneda, [0144] the user may be trying to capture optimal images of an item; if the images captured will not receive the determination of “good” or “liked” by the target audience (first condition), the system generates advice for the user (second condition) to capture the same item in a way which would likely be liked by the target audience, both conditions pertain to better tailoring the image to a preferred theme (item)).
Regarding claim 7 Haneda discloses; The data creation apparatus according to claim 6, wherein the item is availability information related to use of image data as the training data (Haneda, [0144] the user may be trying to capture optimal images of an item; if the images captured will not receive the determination of “good” or “liked” by the target audience (first condition), the system generates advice for the user (second condition) to capture the same item in a way which would likely be liked by the target audience, both conditions pertain to better tailoring the image to a preferred theme (item), where per [0021] the theme is the object being captured, and the target audience, and requirements for the photograph, [0024] the theme (item/availability information) is used to help train the inference model to infer types of images that fit that theme (use of the image data in training)).
Regarding claim 8 Haneda discloses; The data creation apparatus according to claim 7, wherein the availability information includes at least one of user information related to use of the image data, restriction information related to restriction of an aim of use of the image data, or copyright holder information of the image data (Haneda, [0144] the user may be trying to capture optimal images of an item; if the images captured will not receive the determination of “good” or “liked” by the target audience (first condition), the system generates advice for the user (second condition) to capture the same item in a way which would likely be liked by the target audience, both conditions pertain to better tailoring the image to a preferred theme (item), where per [0021] the theme is the object being captured, and the target audience, and requirements for the photograph, [0024] the theme (item/availability information) is used to help train the inference model to infer types of images that fit that theme (use of the image data in training), where the disclosure of the theme to the user constitutes user information related to use of the image data; further, [0022] notes that the user is presented with a menu of the demographic information for the photo and theme, which further provides information to the user on the use of the image data and restriction of use because the user’s data is being restricted to a certain demographic target audience).
Regarding claim 9 Haneda discloses; The data creation apparatus according to claim 7, wherein the content of the first condition is content of selecting image data based on the availability information (Haneda, [0030] the inference engine matches an image’s setting information with a request from a user (first condition), where the setting information contains information about the image analogous to the accessory information, [0026] where the setting condition may include a theme, where the theme (item/availability information) is used to help train the inference model to infer types of images that fit that theme),
and the content of the second condition is content of selecting image data in which the availability information is not recorded or image data in which the availability information indicating that there is no restriction on use of the image data is recorded (Haneda, [0049] scene/theme (item) determination may be omitted depending on the user’s preference (second condition), where the scene/theme is the limiting item/availability information which is being omitted, indicating no restriction on the data).
Regarding claim 10 Haneda discloses; The data creation apparatus according to claim 6, wherein the item is an item related to a type of a subject captured in an image based on image data (Haneda, [0037] the scene/theme (item) is the subject of the image; for example, the theme may be “Paris”, where the image is of people and scenery falling into that category).
Regarding claim 11 Haneda discloses; The data creation apparatus according to claim 1, wherein the first condition is a condition related to a subject captured in an image based on image data (Haneda, [0030] the inference engine matches an image’s setting information with a request from a user (first condition), where the setting information contains information about the image analogous to the accessory information, [0026] where the setting condition may include a theme, where the theme (item/availability information/subject captured) is used to help train the inference model to infer types of images that fit that theme),
and the suggestion processing is processing of suggesting the second condition based on a feature of the subject of the first condition (Haneda, [0144] the user may be trying to capture optimal images of an item; if the images captured will not receive the determination of “good” or “liked” by the target audience (first condition), the system generates advice for the user (second condition) to capture the same item in a way which would likely be liked by the target audience, both conditions pertain to better tailoring the image to a preferred theme (item), [0170] the advice generated may be based on a feature of the item).
Regarding claim 12 Haneda discloses; The data creation apparatus according to claim 1, wherein the suggestion processing is processing of suggesting the second condition of a higher-level concept obtained by making the first condition more abstract (Haneda, [0056] the advice/guidance information (second condition/suggestion) is advice for capturing images that will be evaluated highly based upon the theme/user request (first condition), this can be general advice for the adjustment of image capturing conditions, or advice on whether it is even possible for the image to receive good evaluations for fitting the theme/request specifications).
Regarding claim 13 Haneda discloses; A data creation method of creating training data for performing machine learning from a plurality of pieces of image data in which accessory information is recorded, the data creation method comprising (Haneda, abstract, the system is a learning device that takes in photographs and generates training data to train a model):
a setting step of setting a first condition for selecting first selection image data based on the accessory information from the plurality of pieces of image data (Haneda, [0125] the system has a setting condition section containing information such as shooting information, focal length, exposure, and conditions of the accessory, which is analogous to the accessory information described in figure 6 of the applicant’s specification, where the accessory information is defined as information about imaging condition, subject info/object info, image quality, availability, history, and purpose);
a selection step of selecting the first selection image data in which the accessory information conforming to the first condition is recorded from the plurality of pieces of image data (Haneda, [0030] the inference engine matches an image’s setting information with a request from a user (first condition), where the setting information contains information about the image analogous to the accessory information);
a suggestion step of suggesting a second condition for selecting second selection image data based on the accessory information from non-selection image data that does not conform to the first condition among the plurality of pieces of image data (Haneda, [0143] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition));
and a creation step of creating the training data based on the first selection image data in a case where a user has not employed the second condition (Haneda, [0048] in a situation where the image data has been determined as “like” or “correct” (meeting the first request or condition), that data is used as training data; if the first condition is satisfied, the user will not have employed the second condition)
and creating the training data based on the first selection image data and on the second selection image data in a case where the user has employed the second condition (Haneda, [0048] images not evaluated as being correct may also be input as training data (first and second conditions) in the event where the second set of images is similar to the first set, but not an exact match (the second condition is employed as described in [0142]-[0144] above)).
Regarding claim 14 Haneda discloses; A program causing a computer to execute each processing of the data creation apparatus according to claim 1 (Haneda, [0196] the system has a non-transitory computer readable medium causing a computer to execute the method).
Regarding claim 15 Haneda discloses; A computer readable recording medium on which a program causing a computer to execute each processing of the data creation apparatus according to claim 1 is recorded (Haneda, [0196] the system has a non-transitory computer readable medium causing a computer to execute the method).
Regarding claim 16 Haneda discloses; The data creation apparatus according to claim 2, wherein the processor is configured to execute the machine learning based on an employment result of whether or not the user has employed the second condition, and in the suggestion processing, the second condition is suggested based on the machine learning of the employment result (Haneda, [0144] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition); [0146] and [0147] after the camera shooting/image capture advice is suggested (second condition for image capture), it can be determined whether or not the advice has been adopted (second suggestion has been employed) and whether the image may be captured; further, as described in [0147], advice that is not taken into account (non-employment of the second suggestion) may be treated by the machine learning system as advice that is of no use to the user, and may not be suggested in the future).
Regarding claim 17 Haneda discloses; The data creation apparatus according to claim 2, wherein the processor is configured to execute notification processing of providing notification of information related to the second condition (Haneda, [0144] the suggestion/advice information (second condition suggestion) is displayed to the user).
Regarding claim 18 Haneda discloses; The data creation apparatus according to claim 2, wherein the first condition and the second condition include an item related to the accessory information and content related to the item (Haneda, [0125] the system has a setting condition section containing information such as shooting information, focal length, exposure, and conditions of the accessory, which is analogous to the accessory information described in figure 6 of the applicant’s specification, where the accessory information is defined as information about imaging condition, subject info/object info, image quality, availability, history, and purpose, and [0144] if the condition is not a match for certain images, a set of conditions are generated and presented as advice (second condition is suggested) for image shooting/capture conditions which would result in the image matching the user theme request (first condition)).
Regarding claim 19 Haneda discloses; The data creation apparatus according to claim 2, wherein the first condition is a condition related to a subject captured in an image based on image data (Haneda, [0030] the inference engine matches an image’s setting information with a request from a user (first condition), where the setting information contains information about the image analogous to the accessory information, [0026] where the setting condition may include a theme, where the theme (item/availability information/subject captured) is used to help train the inference model to infer types of images that fit that theme),
and the suggestion processing is processing of suggesting the second condition based on a feature of the subject of the first condition (Haneda, [0144] the user may be trying to capture optimal images of an item; if the images captured will not receive the determination of “good” or “liked” by the target audience (first condition), the system generates advice for the user (second condition) to capture the same item in a way which is likely to be liked by the target audience, both conditions pertain to better tailoring the image to a preferred theme (item), [0170] the advice generated may be based on a feature of the item).
Regarding claim 20 Haneda discloses; The data creation apparatus according to claim 2, wherein the suggestion processing is processing of suggesting the second condition of a higher-level concept obtained by making the first condition more abstract (Haneda, [0056] the advice/guidance information (second condition/suggestion) is advice for capturing images that will be evaluated highly based upon the theme/user request (first condition), this can be general advice for the adjustment of image capturing conditions, or advice on whether it is even possible for the image to receive good evaluations for fitting the theme/request specifications).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous prior art as determined by the examiner, please see the attached PTO-892 Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666