DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is based on PCT/JP2022/023225. Priority to Japanese Application No. JP2021-125772, with a priority date of 07/30/2021, is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55. Copies of the certified papers required by 37 CFR 1.55 have been retrieved.
Information Disclosure Statement
The IDS dated 04/15/2024 has been considered and placed in the application file.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the
plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification.
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and ordinary meaning of terms as understood by one having ordinary skill in the art used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claim 1 recites “or” in the listing “first information related to the machine learning or to second information related to a creator of the image data, a creator of the accessory information, or a right holder of the image data”. Because “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim 10 recites “or” in the listing “first setting condition related to the first information or to the second information and any second setting condition”. Because “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim 11 recites “or” in the listing “image processing performed with respect to the image by the apparatus, or an imaging environment of the image”. Because “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim 14 recites “or” in the listing “first information related to the machine learning or to second information related to a creator of the image data, a creator of the accessory information, or a right holder of the image data”. Because “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim 15 recites “or” in the listing “first information related to the machine learning or to second information related to a creator of the image data, a creator of the accessory information, or a right holder of the image data”. Because “or” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-16 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the claims of co-pending Application No. 18/420,705 in view of Haneda et al. (US 2020/0242154 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are obvious variants of the claims of the above-listed reference application, as shown in the comparison below.
This is a provisional nonstatutory obviousness-type double patenting rejection because the patentably indistinct claims have not in fact been patented.
For example, the following chart compares claim 1 of the instant application with claim 1 of co-pending Application No. 18/420,705:
Instant Application No. 18/425,435, claim 1:
A data creation apparatus that creates training data used in machine learning from image data in which accessory information is recorded, the data creation apparatus being configured to execute:
setting processing of setting any setting condition related to first information related to the machine learning or to second information related to a creator of the image data, a creator of the accessory information, or a right holder of the image data with respect to a plurality of pieces of the image data in which the accessory information including the first information or the second information is recorded; and
creation processing of creating the training data based on selection image data in which the first information or the second information satisfying the setting condition is recorded.
Co-pending U.S. Application No. 18/420,705, claim 1:
A data creation apparatus that creates training data used in machine learning from image data in which accessory information is recorded in an image in which a plurality of subjects are captured, the data creation apparatus being configured to execute:
setting processing of setting any setting condition related to identification information and to image quality information with respect to a plurality of pieces of image data in which the accessory information including a plurality of pieces of the identification information assigned in association with the plurality of subjects and a plurality of pieces of the image quality information assigned in association with the plurality of subjects is recorded; and
creation processing of creating the training data based on selection image data in which the identification information and the image quality information satisfying the setting condition are recorded.
Although co-pending application 18/420,705 recites “related to identification information and to image quality information”, which can be interpreted as “first information related to the machine learning or to second information related to a creator”, and “identification information and the image quality information”, which can be interpreted as “the first information or the second information”, it does not explicitly disclose “in which a plurality of subjects are captured” and “pieces of the identification information assigned in association with the plurality of subjects”. However, in an analogous field of endeavor, Haneda discloses “in which a plurality of subjects are captured” (Haneda ¶0096 discloses classifying and storing a plurality of objects in the image) and “pieces of the identification information assigned in association with the plurality of subjects” (Haneda ¶0096 discloses classifying the images based on the object itself, such as a cat, and ¶0303 discloses attaching identification information upon file creation of the image). Accordingly, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the limitations of claim 1 of co-pending application 18/420,705 with the teachings of Haneda to use identification information to assign identities to the multiple subjects in an image. One of ordinary skill in the art would have been motivated to combine the limitations of claim 1 of co-pending application 18/420,705 with the Haneda reference because “it is desired to generate training data for an actual scene that changes minute by minute, and to perform maintenance of a learning model,” as disclosed by Haneda in ¶0004. Therefore, it would have been obvious to combine the limitations of claim 1 of co-pending application 18/420,705 and Haneda to obtain the invention of instant claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 16 is rejected under 35 U.S.C. 101 because the claim appears to be directed to a software embodiment and not to a hardware embodiment, where a machine claim is directed to a system, apparatus, or arrangement. Paragraphs [0018] and [0033]-[0037] of the Published Specification describe the elements of the system being implemented as software alone actualizing the embodiments of the invention. The claimed limitations are capable of being performed as software alone, as described in the above paragraphs, since no hardware component is being claimed. Software alone is not a physical component and thus is not statutory, since software does not define any structural and functional interrelationships between the computer programs and other claimed elements of a computer that permit the computer's program functionality to be realized. Hence, the stated functions comprise software, and the claim is thus not directed to a hardware embodiment. Data structures not claimed as embodied in computer-readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held nonstatutory). Such claimed data structures do not define any structural and functional interrelationships between the data and other claimed aspects of the invention that permit the data structure's functionality to be realized. In contrast, a claimed computer-readable medium encoded with a data structure defines structural and functional interrelationships between the data structure and the computer software and hardware components that permit the data structure's functionality to be realized, and is thus statutory.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6, and 8-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Haneda et al. (US Patent Application Publication No. 2020/0242154 A1, hereinafter “Haneda”).
Regarding Claim 1, Haneda teaches a data creation apparatus (Haneda ¶0005 discloses an image file generation device) that creates training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) used in machine learning from image data (Haneda ¶0007 and Figs 3A and 3B disclose the training data being associated with image data to create an inference model) in which accessory information is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved), the data creation apparatus (Haneda ¶0005 discloses an image file generation device) being configured to execute:
setting processing of setting any setting condition (Haneda ¶0045, ¶0049, and Fig 1B, 101b disclose a setting control section) related to first information related to the machine learning (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or to second information related to a creator of the image data (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data), a creator of the accessory information (Haneda ¶0085 and Fig 3A, 100 disclose information about the camera creating metadata based on the captured images), or a right holder of the image data (Haneda ¶0098 discloses copyright and user rights in association with the training data) with respect to a plurality of pieces of the image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information is recorded (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data); and
creation processing of creating the training data (Haneda Fig 2, 302 and ¶0068-¶0070 disclose a creation section where the training data is created) based on selection image data (Haneda ¶0093, ¶0167, ¶0206, Fig 4, and Fig 19B disclose selection of images and metadata selection) in which the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) satisfying the setting condition (Haneda ¶0045, ¶0049, and Fig 1B, 101b disclose a setting control section) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved).
Regarding Claim 2, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved),
wherein the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is permission information (Haneda ¶0197, ¶0215, and ¶0226 disclose rights to images, whether the images are used in the training data, and whether the data used is a security concern) related to permission to use the image data in creating the training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) in the machine learning (Haneda ¶0007 and Figs 3A and 3B disclose the training data being associated with image data to create an inference model).
Regarding Claim 3, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 2,
wherein the permission information (Haneda ¶0197, ¶0215, and ¶0226 disclose rights to images, whether the images are used in the training data, and whether the data used is a security concern) includes information related to a person (Haneda ¶0098 discloses that the copyright and portrait rights may be related to the user's own personal needs) related to the permission for the image data (Haneda ¶0197, ¶0215, and ¶0226 disclose rights to images, whether the images are used in the training data, and whether the data used is a security concern).
Regarding Claim 4, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved),
wherein the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is history information (Haneda Fig 21, Item 14 and Fig 22, S207 disclose history information) related to a history of use (Haneda ¶0317 discloses history information based on use in a previous model) as the training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) in machine learning (Haneda ¶0007 and Figs 3A and 3B disclose the training data being associated with image data to create an inference model) in the past.
Regarding Claim 6, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved),
wherein the first information is purpose information (Haneda ¶0183 and ¶0188 disclose purpose information) related to a purpose of the machine learning (Haneda ¶0188 discloses the annotations including the purpose of the training data in the image).
Regarding Claim 8, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) further including learning information (Haneda ¶0280 discloses learning the event that occurs in the image, and ¶0009 and Fig 11 disclose a learning section) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved),
wherein the learning information (Haneda ¶0280 discloses learning the event that occurs in the image, and ¶0009 and Fig 11 disclose a learning section) is information related to a subject in an image recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) in the image data (Haneda ¶0276 and ¶0280 disclose using the annotations for learning the object and the characteristics of the object in the image).
Regarding Claim 9, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 8,
wherein in the acquisition processing (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera), the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including the second information is recorded are acquired (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data), and
the second information is creator information (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) related to a creator (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) of the learning information (Haneda ¶0280 discloses learning the event that occurs in the image, and ¶0009 and Fig 11 disclose a learning section).
Regarding Claim 10, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15, S21 discloses the acquisition of the first training data, and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved) including imaging condition information (Haneda ¶0172 discloses the conditions of a photographed object) related to an imaging condition of an image is recorded (Haneda ¶0172 discloses the conditions being the conditions under which the image was taken),
wherein in the setting processing (Haneda ¶0045, ¶0049, and Fig 1B, 101b disclose a setting control section), each of any first setting condition (Haneda ¶0089 discloses the multiple setting conditions of the camera that captures the images) related to the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or to the second information and any second setting condition (Haneda ¶0089 discloses the multiple setting conditions of the camera that captures the images, including the setting relating to the shooting mode, which is related to the image conditions) related to the imaging condition information is set (Haneda ¶0172 discloses the conditions of a photographed object), and
in the creation processing, the training data is created (Haneda Fig 2, 302 and ¶0068-¶0070 disclose a creation section where the training data is created) based on the selection image data (Haneda ¶0093, ¶0167, ¶0206, Fig 4, and Fig 19B disclose selection of images and metadata selection) in which the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) satisfying the first setting condition (Haneda ¶0089 discloses the multiple setting conditions of the camera that captures the images) and the imaging condition information (Haneda ¶0172 discloses the conditions being the conditions under which the image was taken) satisfying the second setting condition are recorded (Haneda ¶0089 discloses the multiple setting conditions of the camera that captures the images, including the setting relating to the shooting mode, which is related to the image conditions).
Regarding Claim 11, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 10,
wherein the imaging condition information (Haneda ¶0172 discloses the conditions being the conditions under which the image was taken) is information related to at least one of an apparatus that has captured the image (Haneda ¶0049 discloses a camera capturing the image of the object and focusing on certain objects within the frame, including conditions of the image), image processing performed with respect to the image by the apparatus (Haneda ¶0046 and ¶0051 disclose performing image processing on the image), or an imaging environment of the image (Haneda ¶0092, ¶0138, ¶0157, and ¶0259 disclose characteristics included in the imaging environment, such as darkness).
Regarding Claim 12, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
suggestion processing of suggesting an additional condition (Haneda ¶0269 and ¶0271 disclose determining whether relearning is necessary or needs to be requested) different from the setting condition to a user (Haneda ¶0045, ¶0049, and Fig 1B, 101b disclose a setting control section), wherein the additional condition (Haneda ¶0269 and ¶0271 disclose determining whether relearning is necessary or needs to be requested) is a condition set (Haneda Fig 10 discloses how the relearning is determined based on test data) with respect to the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12, 105 discloses a storage section where metadata is saved), additional image data (Haneda ¶0208 discloses attaching time stamps to the images during the learning process) is selected under the additional condition (Haneda ¶0269 and ¶0271 disclose determining whether relearning is necessary or needs to be requested) from non-selection image data of which the first information or the second information does not satisfy (Haneda ¶0194 discloses relearning being performed if the error detection is too high for the first training data) the setting condition (Haneda ¶0045, ¶0049, and Fig 1B, 101b disclose a setting control section), and in a case where the additional image data (Haneda ¶0208 discloses attaching time stamps to the images during the learning process) is selected, the training data is created (Haneda ¶0193 discloses creating second training data) in the creation processing based on the selection image data (Haneda ¶0093, ¶0167, ¶0206, Fig 4, and Fig 19B disclose selection of images and metadata selection) and on the additional image data (Haneda ¶0208 discloses attaching time stamps to the images during the learning process).
Regarding Claim 13, Haneda teaches a storage device (Haneda ¶0041 and ¶0054 disclose a storage section) that stores a plurality of pieces of image data (Haneda Fig 12, 105a discloses image data made up of multiple test data candidates and image files with metadata) to be used for creating the training data (Haneda ¶0007 and Figs 3A and 3B disclose the training data being associated with image data to create an inference model) via the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1.
Regarding Claim 14, Haneda teaches a data processing system (Haneda Fig 4 and ¶0012 discloses the data processing system of the apparatus) comprising:
a data creation apparatus (Haneda ¶0005, disclose an image file generation device) that creates training data (Haneda ¶0004 and Fig 13 discloses creating training data from image data) from image data (Haneda ¶0007, Fig 3A and 3B discloses the training data being associated with image data to create an interference model) in which accessory information is recorded (Haneda ¶0006 and Fig 14 discloses metadata about the image Fig 12 105 discloses a storage section where metadata is saved; and
a learning apparatus (Haneda ¶0017, ¶0044, and Fig 9 disclose a learning device) that performs machine learning using the training data (Haneda ¶0007, Fig 3A and 3B disclose the training data being associated with image data to create an inference model), the data processing system (Haneda Fig 4 and ¶0012 disclose the data processing system of the apparatus) being configured to execute:
setting processing of setting any setting condition (Haneda ¶0045, ¶0049 and Fig 1B 101b disclose a setting control section) related to first information related to the machine learning (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or to second information related to a creator of the image data (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data), a creator of the accessory information (Haneda ¶0085 and Fig 3A 100 disclose information about the camera creating metadata based on the captured images), or a right holder of the image data (Haneda ¶0098 discloses copyright and user rights in association with the training data) with respect to a plurality of pieces of the image data (Haneda Fig 12 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information is recorded (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data); and
creation processing of creating the training data (Haneda Fig 2 302 and ¶0068-¶0070 disclose a creation section where the training data is created) based on selection image data (Haneda ¶0093, ¶0167, ¶0206, Fig 4 and Fig 19B disclose selection of images and metadata selection) in which the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) satisfying the setting condition (Haneda ¶0045, ¶0049 and Fig 1B 101b disclose a setting control section) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved).
Regarding Claim 15, Haneda teaches a data creation method (Haneda ¶0002, ¶0005, ¶0007 disclose an image file generating method) of creating training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) used in machine learning from image data (Haneda ¶0007, Fig 3A and 3B disclose the training data being associated with image data to create an inference model) in which accessory information is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved), the data creation method (Haneda ¶0002, ¶0005, ¶0007 disclose an image file generating method) comprising:
a setting step of setting any setting condition (Haneda ¶0045, ¶0049 and Fig 1B 101b disclose a setting control section) related to first information related to the machine learning (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or to second information related to a creator of the image data (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data), a creator of the accessory information (Haneda ¶0085 and Fig 3A 100 disclose information about the camera creating metadata based on the captured images), or a right holder of the image data (Haneda ¶0098 discloses copyright and user rights in association with the training data) with respect to a plurality of pieces of the image data (Haneda Fig 12 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information is recorded (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data); and
a creation step of creating the training data (Haneda Fig 2 302 and ¶0068-¶0070 disclose a creation section where the training data is created) based on selection image data (Haneda ¶0093, ¶0167, ¶0206, Fig 4 and Fig 19B disclose selection of images and metadata selection) in which the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) or the second information (Haneda ¶0090 discloses administration information related to the creation date of the inference model including the training data) satisfying the setting condition (Haneda ¶0045, ¶0049 and Fig 1B 101b disclose a setting control section) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved).
Regarding Claim 16, Haneda teaches a program (Haneda ¶0042-¶0045 disclose a program) causing a computer to function (Haneda ¶0330-¶0331 disclose hardware that generally makes up computers executing computer programs) as the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, the program causing the computer to execute (Haneda ¶0330-¶0331 disclose hardware that generally makes up computers executing computer programs) each of the setting processing (Haneda ¶0045, ¶0049 and Fig 1B 101b disclose a setting control section) and the creation processing (Haneda Fig 2 302 and ¶0068-¶0070 disclose a creation section where the training data is created).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Haneda et al. (US Patent Publication 2020/0242154 A1, hereafter referred to as Haneda) in view of Karia et al. (US Patent Publication 2020/0364358 A1, hereafter referred to as Karia).
Regarding Claim 5, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 4,
wherein the history information (Haneda Fig 21 Item 14 and Fig 22 S207 disclose history information) includes information related to the training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) in the machine learning (Haneda ¶0007, Fig 3A and 3B disclose the training data being associated with image data to create an inference model) using the training data (Haneda ¶0004 and Fig 13 disclose creating training data from image data) created based on the image data (Haneda ¶0007, Fig 3A and 3B disclose the training data being associated with image data to create an inference model).
Haneda does not explicitly disclose information related to whether or not the image data is used as correct answer data.
Karia is in the same field of endeavor, namely data permissions in image analysis. Further, Karia teaches information related to whether or not the data is used as correct answer data (Karia ¶0079-¶0081 disclose using proof of work for the training of the algorithm, including correct answers).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Haneda by incorporating the owner information and the use of correct answer data in training the model, as taught by Karia, to yield an invention that can identify the owner information and its role in training a model. One of ordinary skill in the art would have been motivated to combine the references because there is a need to protect users: digitization has revolutionized and increased the amount of data generated about an individual, and its holistic management on a secured platform is still a gap to fill (Karia ¶0038).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding Claim 7, Haneda teaches the data creation apparatus (Haneda ¶0005 discloses an image file generation device) according to claim 1, which is configured to further execute:
acquisition processing of acquiring (Haneda Fig 15 S21 discloses the acquisition of the first training data and ¶0179 discloses acquiring the image with a camera) the plurality of pieces of image data (Haneda Fig 12 105a discloses image data made up of multiple test data candidates and image files with metadata) in which the accessory information (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved) including the first information (Haneda ¶0049 discloses the setting control section setting inference by the inference engine) is recorded (Haneda ¶0006 and Fig 14 disclose metadata about the image; Fig 12 105 discloses a storage section where metadata is saved), related to a copyright owner of the image data (Haneda ¶0098 discloses that the copyright and portrait rights may be related to the user's own personal needs).
Haneda does not explicitly disclose wherein the second information is owner information.
Karia is in the same field of endeavor, namely data permissions in image analysis. Further, Karia teaches that the second information is owner information (Karia ¶0055 discloses ownership and access information about assets and user data).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Haneda by incorporating the owner information and the use of correct answer data in training the model, as taught by Karia, to yield an invention that can identify the owner information and its role in training a model. One of ordinary skill in the art would have been motivated to combine the references because there is a need to protect users: digitization has revolutionized and increased the amount of data generated about an individual, and its holistic management on a secured platform is still a gap to fill (Karia ¶0038).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Publication US 2020/0242402 A1 to Jung et al. discloses a method for recognizing an object in an image.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS whose telephone number is (571) 272-6413. The examiner can normally be reached Monday-Friday, 7:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RACHEL L ROBERTS/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674