Prosecution Insights
Last updated: April 19, 2026
Application No. 18/179,329

TRAINING DATA CREATION APPARATUS, METHOD, AND PROGRAM, MACHINE LEARNING APPARATUS AND METHOD, LEARNING MODEL, AND IMAGE PROCESSING APPARATUS

Final Rejection — §102, §103
Filed: Mar 06, 2023
Examiner: HUYNH, VAN D
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average), 630 granted / 721 resolved, +25.4% vs TC avg
Interview Lift: +13.4% (moderate), measured on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 25 currently pending
Career History: 746 total applications across all art units
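As a sanity check, the headline figures above can be recomputed from the raw counts. Treating the +13.4% interview lift as a multiplicative boost on the base allow rate is an assumption (the page does not say how the lift combines), but it reproduces the displayed 99%:

```python
# Raw counts quoted above: 630 granted out of 721 resolved cases.
granted, resolved = 630, 721

# Career allow rate: 630 / 721, displayed as 87%.
allow_rate = granted / resolved

# "With Interview" figure, read here as a multiplicative +13.4% lift
# on the base allow rate.
with_interview = allow_rate * 1.134

print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.0%}")
```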

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 30.9% (-9.1% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
TC averages are estimates. Based on career data from 721 resolved cases.
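Reading each delta as a percentage-point difference from the Tech Center average, all four statutes imply the same TC baseline of about 40%, which suggests the figures are internally consistent. A quick check, with the numbers copied from the table above:

```python
# (examiner rate, delta vs TC average) per statute, in percentage points.
stats = {
    "101": (8.8, -31.2),
    "103": (32.0, -8.0),
    "102": (30.9, -9.1),
    "112": (14.2, -25.8),
}

# Implied TC average for each statute: examiner rate minus the delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same ~40.0% baseline
```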

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

No claim is amended. Claims 1-19 are pending in this application.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-3, 7-8, 10-14, 17, and 19 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al., US 2019/0332118.

Regarding claim 1, Wang discloses a training data creation apparatus comprising a first processor (fig. 6, element 616 or 644; para 0061 and 0082; the vehicle computing device 604 or the computing device(s) 642 can include one or more processors) that creates training data for machine learning (figs. 6-7; para 0063 and 0092; training a machine learning algorithm to output one or more masks associated with one or more objects), wherein the first processor: acquires, as a single training sample, a single image and a plurality of first ground-truth region masks for the single image (fig. 1, elements 102 and 112; para 0022 and 0027-0029; fig. 5, elements 502 and 506; para 0051-0052; fig. 7, elements 702 and 706; para 0093 and 0095; capturing sensor data (i.e., image data) and receiving a first mask representing an object in the voxel space associated with the sensor data); generates a single second ground-truth region mask from the plurality of first ground-truth region masks (fig. 1, element 122; para 0031-0033; fig. 5, element 516; para 0057 and 0072; fig. 7, element 710; para 0097; expanding the first mask to generate a second mask); and outputs, as training data, a pair of the single image and the single second ground-truth region mask (fig. 1, element 128; para 0034 and 0073; fig. 7, element 712; para 0098; segmenting, based at least in part on the second mask, the sensor data; in some instances, the second mask can be associated with an object in the voxel space or with a portion of the sensor data; transmitting the machine learning algorithm to a system for segmenting captured sensor data associated with the second mask).

Regarding claim 2, the training data creation apparatus according to claim 1, Wang further discloses wherein the first processor acquires, as the plurality of first ground-truth region masks for the single image, ground-truth region masks each assigned to the single image by a plurality of evaluators (fig. 6, elements 632-638; para 0068-0073; plurality of components).
Regarding claim 3, the training data creation apparatus according to claim 1, Wang further discloses wherein the first processor inputs the single image into each of a plurality of first region extractors trained by machine learning in advance using a ground-truth region mask of each of a plurality of evaluators, and acquires, as the plurality of first ground-truth region masks for the single image, a plurality of region extraction results outputted by the plurality of first region extractors (figs. 6-7; para 0063, 0092-0093, 0095, and 0097).

Regarding claim 7, the training data creation apparatus according to claim 1, Wang further discloses wherein the first processor acquires, as the single second ground-truth region mask, any of: a ground-truth region mask in which a ground-truth region is a region of a common portion of the plurality of first ground-truth region masks; a ground-truth region mask in which the ground-truth region is a region of a union of the plurality of first ground-truth region masks; a ground-truth region mask in which the ground-truth region is a region containing pixels determined to be the ground truth by a majority decision for each pixel in the plurality of first ground-truth region masks; a ground-truth region mask combined by averaging the plurality of first ground-truth region masks; and a first ground-truth region mask which is selected from the plurality of first ground-truth region masks and which has the ground-truth region of maximum or minimum area (figs. 4A-4B; para 0031-0033 and 0046-0049).

Regarding claim 8, the training data creation apparatus according to claim 1, Wang further discloses comprising a recording apparatus storing a training data set containing a plurality of the training data (para 0061 and 0063).

Regarding claim 10, Wang further discloses a machine learning apparatus comprising a second processor (fig. 6, element 616 or 644; para 0061 and 0082) and a second region extractor (fig. 6, element 622 or 648; para 0061 and 0083), wherein the second processor uses machine learning to train the second region extractor using the training data created by the training data creation apparatus according to claim 1 (para 0063, 0083, and 0087-0089).

Regarding claim 11, the machine learning apparatus according to claim 10, Wang further discloses wherein the second region extractor is a learning model configured as a convolutional neural network (para 0087-0089).

Regarding claim 12, Wang further discloses a trained learning model configured as the convolutional neural network, being the second region extractor trained by machine learning performed by the machine learning apparatus according to claim 11 (para 0087-0089).

Regarding claim 13, Wang further discloses an image processing apparatus comprising the learning model according to claim 12 (para 0087-0089).

Regarding claim 14, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Regarding claim 17, Wang further discloses a machine learning method for training a second region extractor (fig. 6, element 622 or 648; para 0061 and 0083) by a second processor (fig. 6, element 616 or 644; para 0061 and 0082) using machine learning and the training data created according to the training data creation method according to claim 14 (para 0063, 0083, and 0087-0089).

Regarding claim 19, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 6, 9, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., US 2019/0332118 in view of Wang, US 2020/0167929.

Regarding claim 6, the training data creation apparatus according to claim 1, Wang ‘118 further discloses wherein the first processor further acquires diagnostic information, and generates the single second ground-truth region mask using the first ground-truth region masks matching the diagnostic information from among the plurality of first ground-truth region masks (figs. 4A-4B; para 0032-0033 and 0046-0049).
Wang ‘118 discloses claim 6 as enumerated above, but does not explicitly disclose biological tissue as claimed. However, Wang ‘929 discloses that medical image segmentation is a technology of detecting and extracting areas or boundaries of target tissues from medical images, and separating target tissues from other tissues (para 0003). Therefore, taking the combined disclosures of Wang ‘118 and Wang ‘929 as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the medical image segmentation taught by Wang ‘929 (detecting and extracting areas or boundaries of target tissues from medical images, and separating target tissues from other tissues) into the invention of Wang ‘118 for the benefit of achieving three-dimensional visualization, three-dimensional locating, tissue quantitative analysis, surgical planning, and computer-aided diagnosis in medical fields (Wang ‘929: para 0003).

Regarding claim 9, the training data creation apparatus according to claim 1, Wang ‘118 further discloses wherein the single image is an image and the plurality of first ground-truth region masks are ground-truth region masks indicating a region of interest, each assigned to the image by a plurality of evaluators (fig. 6, elements 632-638; para 0068-0073; plurality of components). Wang ‘118 discloses claim 9 as enumerated above, but does not explicitly disclose biological tissue as claimed. However, Wang ‘929 discloses that medical image segmentation is a technology of detecting and extracting areas or boundaries of target tissues from medical images, and separating target tissues from other tissues (para 0003). Therefore, taking the combined disclosures of Wang ‘118 and Wang ‘929 as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the medical image segmentation taught by Wang ‘929 into the invention of Wang ‘118 for the same benefit (Wang ‘929: para 0003).

Regarding claim 16, this claim recites substantially the same limitations as claim 6 above and is rejected for the same reasons.

Allowable Subject Matter

Claims 4-5, 15, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art made of record and considered pertinent to the applicant's disclosure, taken individually or in combination, does not teach the claimed invention having the following limitations, in combination with the remaining claimed limitations.

Regarding dependent claims 4 and 15, the prior art does not teach or suggest the claimed invention having "wherein the first processor calculates a sample weighting such that the higher a degree of disagreement among the plurality of first ground-truth region masks is, the smaller the sample weighting of the training sample during machine learning is, and outputs, as training data, the pair of the single image and the single second ground-truth region mask together with the calculated sample weighting", and a combination of other limitations thereof as recited in the claims.
Regarding claims 5 and 18, these claims have been found allowable due to their dependency on claims 4 and 15 above.

Response to Arguments

Applicant's arguments filed 11/07/2025 have been fully considered but they are not persuasive. Regarding independent claim 1, Applicant argues that Wang does not disclose "1) generates a single second ground-truth region mask from the plurality of first ground-truth region masks and 2) outputs, as training data, a pair of the single image and the single second ground-truth region mask" as claimed. Examiner respectfully disagrees.

As stated in the rejection above, 1) Wang discloses that the system can generate the second mask using margin data associated with the first mask from the machine learning algorithm. In other instances, the system can generate the second mask while referencing an additional mask representing an additional object in the voxel space (para 0023). Further, the mask associated with voxels associated with a "pedestrian" classification can be generated at a fixed size, while in another example, a mask associated with voxels associated with a "vehicle" classification can be generated based on a size of the voxel data (para 0052). Therefore, Wang discloses generating a second mask from a plurality of first masks.

2) Wang further discloses transmitting the machine learning algorithm to a system for segmenting captured sensor data associated with the second mask (para 0034, 0073, and 0098). This implies that the machine learning algorithm is trained with sensor data and the second mask as training data in order to perform segmentation/classification by the system. Further, Wang discloses creating a training dataset for use in a machine learning algorithm to identify classes in the data. The training dataset includes objects represented in the sensor data and ground truth information representing a mask. The training dataset can be used to train a machine learning algorithm to identify objects within the sensor data. Once the machine learning algorithm is trained, the machine learning algorithm can then output one or more masks representing one or more objects based on the sensor data (fig. 7, elements 706-712; para 0063 and 0095-0098). Therefore, Wang discloses that the training dataset used to train the machine learning algorithm includes objects represented in the sensor data and ground truth information representing a mask.

MPEP 2111 states that the USPTO must employ the "broadest reasonable interpretation" of the claims. Under the broadest reasonable interpretation, Examiner interprets the claimed "generates a single second ground-truth region mask from the plurality of first ground-truth region masks and outputs, as training data, a pair of the single image and the single second ground-truth region mask", in light of the specification, as generating a second mask from a plurality of first masks, where the training dataset used to train the machine learning algorithm includes objects represented in the sensor data and ground truth information representing a mask. Therefore, the claimed limitation reads on the disclosure of Wang. In view of the above arguments, the Examiner believes all rejections are proper and should be maintained.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH, whose telephone number is (571) 270-1937. The examiner can normally be reached 8AM-6PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN D HUYNH/
Primary Examiner, Art Unit 2665
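The combination options recited in claim 7 (common portion, union, per-pixel majority vote, averaging, or selecting the max/min-area mask) describe standard ways of fusing several annotators' masks into one. A minimal sketch with hypothetical flattened binary masks, not code from the application:

```python
# Three hypothetical evaluators' ground-truth masks for the same image,
# flattened to 1-D to keep the sketch short; 1 = pixel is in the region.
masks = [
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1, 0],
]

def per_pixel(masks):
    """Group the evaluators' votes pixel by pixel."""
    return list(zip(*masks))

intersection = [int(all(v)) for v in per_pixel(masks)]       # common portion
union = [int(any(v)) for v in per_pixel(masks)]              # union
majority = [int(2 * sum(v) > len(v)) for v in per_pixel(masks)]  # majority vote
average = [sum(v) / len(v) for v in per_pixel(masks)]        # averaged soft mask
largest = max(masks, key=sum)                                # max-area first mask
```

Each derived mask is a candidate "single second ground-truth region mask" in the claim's sense; only the averaged variant is non-binary.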
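The feature indicated allowable in claims 4 and 15 (shrinking a training sample's weight as evaluator disagreement grows) can likewise be illustrated. Using per-pixel vote variance as the disagreement measure is an assumption here, since the action does not reproduce the application's actual formula:

```python
# Hypothetical evaluator masks for two training samples, flattened to 1-D.
sample_a = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]  # full agreement
sample_b = [[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 0, 1]]  # heavy disagreement

def disagreement(masks):
    """Mean per-pixel variance of evaluator votes; 0 means full agreement."""
    def var(votes):
        m = sum(votes) / len(votes)
        return sum((x - m) ** 2 for x in votes) / len(votes)
    pixels = list(zip(*masks))
    return sum(var(v) for v in pixels) / len(pixels)

def sample_weight(masks):
    """Higher disagreement gives a smaller training weight, per the claim."""
    return 1.0 / (1.0 + disagreement(masks))

# A sample with unanimous evaluators outweighs a contested one.
w_a, w_b = sample_weight(sample_a), sample_weight(sample_b)
```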

Prosecution Timeline

Mar 06, 2023
Application Filed
Aug 20, 2025
Non-Final Rejection — §102, §103
Nov 07, 2025
Response Filed
Feb 23, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602798
METHOD AND APPARATUS FOR GENERATING SUBJECT-SPECIFIC MAGNETIC RESONANCE ANGIOGRAPHY IMAGES FROM OTHER MULTI-CONTRAST MAGNETIC RESONANCE IMAGES
2y 5m to grant; granted Apr 14, 2026
Patent 12602784
MEDICAL DEVICE FOR TRANSCRIPTION OF APPEARANCES IN AN IMAGE TO TEXT WITH MACHINE LEARNING
2y 5m to grant; granted Apr 14, 2026
Patent 12594046
METHOD AND APPARATUS FOR ASSISTING DIAGNOSIS OF CARDIOEMBOLIC STROKE BY USING CHEST RADIOGRAPHIC IMAGES
2y 5m to grant; granted Apr 07, 2026
Patent 12586186
JAUNDICE ANALYSIS SYSTEM AND METHOD THEREOF
2y 5m to grant; granted Mar 24, 2026
Patent 12582345
Systems and Methods for Identifying Progression of Hypoxic-Ischemic Brain Injury
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+13.4%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 721 resolved cases by this examiner. Grant probability derived from career allow rate.
