Prosecution Insights
Last updated: April 19, 2026
Application No. 17/902,009

LEARNING DATA GENERATING SYSTEM AND LEARNING DATA GENERATING METHOD

Status: Non-Final OA (§103)
Filed: Sep 02, 2022
Examiner: TSAI, JAMES T
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Olympus Corporation
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (184 granted / 297 resolved; +7.0% vs TC avg)
Interview Lift: strong, +56.0% among resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 19 applications currently pending
Career History: 316 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 297 resolved cases.

Office Action

§103
NON-FINAL REJECTION, FIRST DETAILED ACTION

Status of Prosecution

The present application, 17/902,009, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The application was filed in the Office on Sept. 2, 2022 and is a continuation of PCT Application PCT/JP2020/009215, filed on March 4, 2020. Claims 1-17 are pending and all stand rejected in this Office action. Claims 1 and 17 are independent claims.

Status of Claims

Claims 1, 4-7 and 15-17 are rejected under 35 U.S.C. § 103 as being unpatentable over non-patent literature Verma et al. (“Verma”), “Manifold Mixup: Better Representations by Interpolating Hidden States,” published on May 11, 2019, in view of Yang et al. (“Yang”), United States Patent Application Publication 2021/0068788, published on Mar. 11, 2021.

Claims 2-3, 8-9 and 11 are rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang, further in view of Wang et al. (“Wang”), United States Patent Application Publication 2012/0213432, published on Aug. 23, 2012.

Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang and Wang, further in view of non-patent literature Moisan, “Periodic Plus Smooth Image Decomposition,” published online Oct. 27, 2010.

Claims 12-14 are rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang and Wang, further in view of non-patent literature Kim et al. (“Kim”), “Median Filtered Image Restoration and Anti-Forensics Using Adversarial Networks,” published Feb. 2018.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A. Claims 1, 4-7 and 15-17 are rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang.

As to Claim 1, Verma teaches: A learning data generating system comprising a processor (Wang: par. 0020, a processor and memory may implement an ultrasound engine), the processor being configured to implement: acquiring a first image, a second image, first correct information corresponding to the first image, and second correct information corresponding to the second image; inputting the first image to a first neural network to generate a first feature map by the first neural network and inputting the second image to the first neural network to generate a second feature map by the first neural network; generating a combined feature map; inputting the combined feature map to a second neural network to generate output information by the second neural network (Examiner notes that Applicant has admitted that Verma is prior art, which is applicable to at least these limitations. Specification: Background, “In [Verma’s] method, two different images are input to a convolutional neural network (CNN) to extract a feature map that is output of an intermediate layer of the CNN, a feature map of the first image and a feature map of the second image are subjected to addition with weighting to combine the feature maps, and the combined feature maps are input to the next intermediate layer. In addition to learning based on two original images, learning of combining the feature maps in the intermediate layer is performed.”).

Verma further teaches: calculating an output error based on the output information, the first correct information, and the second correct information (Verma: eq. 1 is a loss function that is to be minimized based on the output and the correct information (x’, y’)); and updating the first neural network and the second neural network based on the output error (Verma: p. 5, backpropagation takes place).

Verma may not explicitly teach: generating a combined feature map by replacing a part of the first feature map with a part of the second feature map.
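The distinction drawn above, replacing a region of the first feature map with the corresponding region of the second rather than forming Verma's weighted sum over the whole map, can be sketched as follows. This is a minimal illustration only: NumPy arrays stand in for CNN intermediate-layer outputs, and all names, shapes, and the rectangular-region choice are assumptions, not the applicant's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intermediate-layer feature maps, shape (channels, height, width)
fmap1 = rng.normal(size=(4, 8, 8))   # from the first image
fmap2 = rng.normal(size=(4, 8, 8))   # from the second image

def combine_by_replacement(f1, f2, top=2, left=2, h=4, w=4):
    """Replace a rectangular region of f1 with the same region of f2.

    This differs from Verma's Manifold Mixup, which instead forms a
    weighted sum lam * f1 + (1 - lam) * f2 over the entire map.
    """
    combined = f1.copy()
    combined[:, top:top + h, left:left + w] = f2[:, top:top + h, left:left + w]
    # The "first rate" of the claims: fraction of the map taken from f2
    rate = (h * w) / (f1.shape[1] * f1.shape[2])
    return combined, rate

combined, rate = combine_by_replacement(fmap1, fmap2)
print(combined.shape, rate)  # (4, 8, 8) 0.25
```

The replaced region keeps the second map's values exactly, so the combined map is a patchwork rather than an interpolation, which is the feature the Office Action concedes Verma may not explicitly teach.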
Yang teaches in general concepts related to joining two different ultrasound images together automatically (Yang: Abstract). Specifically, Yang teaches that the two images may overlap and may be stitched together in a way that one portion is removed, and therefore replaced by the other (Yang: par. 0092, “In some examples, if portions of the first and/or second images are redundant such that an overlap may occur, the portions may be cropped (e.g., removed).”). Feature maps are combined as part of the convolution and stitching process, which in the case of overlap, as disclosed by Yang, could involve the replacing of part of the feature map (Yang: par. 0069).

It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified Verma by incorporating the replacement of parts of the feature map for the overlapping images as taught and suggested by Yang. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow the combined feature map to be continuous in nature.

As to Claim 4, Verma and Yang teach the limitations of claim 1. Verma further teaches: wherein the processor implements calculating a first output error based on the output information and the first correct information, calculating a second output error based on the output information and the second correct information, and calculating a weighted sum of the first output error and the second output error as the output error (Verma: eq. 1 is a weighted sum of the two errors, which are based on the labeled information and the output information).

As to Claim 5, Verma and Yang teach the limitations of claim 1.
Yang further teaches: wherein the processor implements at least one of a first augmentation process of subjecting the first input image to data augmentation to generate the first image and a second augmentation process of subjecting the second input image to data augmentation to generate the second image (Yang: par. 0081, the spatial transformation matrix is calculated to be used for the images).

As to Claim 6, Verma and Yang teach the limitations of claim 5. Yang further teaches: wherein the first augmentation process includes a process of performing, on the basis of a positional relationship between a first recognition target appearing in the first input image and a second recognition target appearing in the second input image, position correction of the first recognition target with respect to the first input image, and the second augmentation process includes a process of performing, on the basis of the positional relationship, position correction of the second recognition target with respect to the second input image (Yang: par. 0051, images may be transformed using a transformation matrix; par. 0054, a rotation; par. 0081, the spatial transformation matrix is calculated to be used for the images).

As to Claim 7, Verma and Yang teach the limitations of claim 5. Yang further teaches: wherein the processor implements at least one of the first augmentation process and the second augmentation process by at least one process selected from color correction, brightness correction, a smoothing process, a sharpening process, noise addition (Yang: par. 0053, noise rejection), and affine transformation.

As to Claim 15, Verma and Yang teach the limitations of claim 1. Yang further teaches: wherein the first image and the second image are ultrasonic images (Yang: Abstract, the images may be ultrasound images).

As to Claim 16, Verma and Yang teach the limitations of claim 1.
Verma further teaches: wherein the first image and the second image are classified in different classification categories (Verma: sec. 5.1, classifiers are applied to the different images, which can result in different categories).

As to Claim 17, it is rejected for similar reasons as claim 1.

B. Claims 2-3, 8-9 and 11 are rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang, further in view of Wang.

As to Claim 2, Verma and Yang teach the limitations of claim 1. Verma and Yang may not explicitly teach: wherein the first feature map includes a first plurality of channels, the second feature map includes a second plurality of channels, and the processor implements replacing the whole of a part of the first plurality of channels with the whole of a part of the second plurality of channels.

Wang teaches in general concepts related to automatic segmentation of video sequences (Wang: Abstract). Specifically, Wang teaches that the video sequences are made up of different images and that segmentation is based on a weighted combination of segmentation shape prediction and a segmentation color model (Wang: Abstract). Wang notes that images may be composed of multiple color channels (Wang: par. 0006, the color channels may be red, green and blue, with an alpha channel as well for transparency).

It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Verma-Yang combination by including the channels as part of the feature maps as taught by Wang.
Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for the complete replacement of the feature maps with their corresponding channel information.

As to Claim 3, Verma, Yang and Wang teach the limitations of claim 2. Yang further teaches: wherein the first image and the second image are ultrasonic images (Yang: Abstract, the images may be ultrasound images).

As to Claim 8, Verma and Yang teach the limitations of claim 1. Verma and Yang may not explicitly teach: wherein the first feature map includes a first plurality of channels, the second feature map includes a second plurality of channels, and the processor implements replacing a partial region of a channel included in the first plurality of channels with a partial region of a channel included in the second plurality of channels.

Wang teaches in general concepts related to automatic segmentation of video sequences (Wang: Abstract). Specifically, Wang teaches that the video sequences are made up of different images and that segmentation is based on a weighted combination of segmentation shape prediction and a segmentation color model (Wang: Abstract). Wang notes that images may be composed of multiple color channels (Wang: par. 0006, the color channels may be red, green and blue, with an alpha channel as well for transparency). It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Verma-Yang combination by including the channels as part of the feature maps as taught by Wang. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for the complete replacement of the feature maps with their corresponding channel information.

As to Claim 9, Verma, Yang and Wang teach the limitations of claim 8.
Wang further teaches: wherein the processor implements replacing a band-like region of the channel included in the first plurality of channels with a band-like region of the channel included in the second plurality of channels (Wang: par. 0015, bands around a contour may be considered for sampling in a classifier; Examiner asserts the band of the contour would therefore correspond to the feature map channel replacement in the combination).

As to Claim 11, Verma, Yang and Wang teach the limitations of claim 8. Yang further teaches: wherein the processor implements determining a size of the partial region to be replaced in the channel included in the first plurality of channels on the basis of classification categories of the first image and the second image (Yang: par. 0091, each classifier may determine a feature map for a respective portion of the image proximate to an estimated contour, which Examiner notes may be of different shapes and thus sizes).

C. Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang and Wang, further in view of Moisan.

As to Claim 10, Verma, Yang and Wang teach the limitations of claim 8. Verma, Yang and Wang may not explicitly teach: wherein the processor implements replacing a region set to be periodic in the channel included in the first plurality of channels with a region set to be periodic in the channel included in the second plurality of channels.
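The channel-level variants recited in claims 2 and 8-9, replacing the whole of selected channels or only a band-like region within one channel, might be sketched like this. This is a hypothetical illustration on NumPy arrays under the same shape assumptions as above, not the applicant's implementation or any reference's code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical multi-channel feature maps, shape (channels, height, width)
fmap1 = rng.normal(size=(6, 8, 8))
fmap2 = rng.normal(size=(6, 8, 8))

def replace_channels(f1, f2, channels):
    """Claim 2 style: replace the whole of the selected channels of f1
    with the corresponding channels of f2."""
    out = f1.copy()
    out[channels] = f2[channels]
    return out

def replace_band(f1, f2, channel, row_start, row_stop):
    """Claims 8-9 style: replace a band-like (horizontal strip) region
    of a single channel of f1 with the same region of f2."""
    out = f1.copy()
    out[channel, row_start:row_stop, :] = f2[channel, row_start:row_stop, :]
    return out

whole = replace_channels(fmap1, fmap2, [0, 3])          # channels 0 and 3 swapped in
band = replace_band(fmap1, fmap2, channel=1, row_start=2, row_stop=5)
```

Whole-channel replacement and band replacement are the two granularities the examiner maps to Wang's color channels and contour bands, respectively.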
Moisan teaches in general decomposing images into a sum of a periodic component and a smooth component (Moisan: Abstract). Specifically, Moisan teaches constructing a periodic image (Moisan: Proposition 4, eq. 24, “(with periodicity) a periodic image, we should not be able, with local inspection, to determine where the original frame border was.”). This method may be applied to various channels of an image (Moisan: Conclusion).

It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Verma-Yang-Wang combination by considering periodicity in the channels as part of the feature maps for replacement as taught by Moisan. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for the preservation of certain periodic elements in a channel.

D. Claims 12-14 are rejected under 35 U.S.C. § 103 as being unpatentable over Verma in view of Yang and Wang, further in view of Kim.

As to Claim 12, Verma and Yang teach the limitations of claim 1.
Verma and Yang may not explicitly teach: wherein the processor implements: replacing a part of the first feature map with a part of the second feature map at a first rate; and calculating a first output error based on the output information and the first correct information, calculating a second output error based on the output information and the second correct information, calculating a weighted sum of the first output error and the second output error by weighting based on the first rate, and defining the weighted sum as the output error.

Kim teaches in general median-filtered image restoration using adversarial networks and deep convolutional neural networks to remove traces from the median-filtered images (Kim: Abstract). Specifically, Kim teaches varying the rate of replacement of a center portion of the original image and notes the effect on error rates (Kim: Fig. 3 and related discussion).

It would have been obvious to a person having ordinary skill in the art at a time before the effective filing date of the application to have modified the Verma-Yang combination by considering the replacement rates and their effect on error calculations as taught and suggested by Kim. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for better consideration of the effect of replacement rates.

As to Claim 13, Verma, Yang and Kim teach the limitations of claim 12. Verma, Yang and Kim further teach: wherein the processor implements calculating the weighted sum of the first output error and the second output error at a rate same as the first rate (Examiner notes that nothing in Kim suggests that the rates would be the same or not the same for the effectuated calculation of the weighted sum).

As to Claim 14, Verma, Yang and Kim teach the limitations of claim 12.
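The rate-based loss weighting at issue in claims 4 and 12-14, a weighted sum of the two output errors with the weight tied (claim 13) or not tied (claim 14) to the replacement rate, reduces to a one-line computation. The convention below, weighting the second image's error by the fraction of the feature map taken from the second image, is an assumption for illustration and is not taken from the record:

```python
def rate_weighted_loss(err1, err2, rate):
    """Weighted sum of the two output errors.

    Claim 13 reads on using the replacement rate itself as the weight;
    claim 14 on using a different rate. Weighting err2 by the fraction
    of the map taken from the second image is illustrative only.
    """
    return (1.0 - rate) * err1 + rate * err2

# With a quarter of the feature map replaced, the second image's
# error contributes a quarter of the combined loss.
loss = rate_weighted_loss(0.8, 0.4, 0.25)
```

At rate 0 the loss collapses to the first image's error and at rate 1 to the second's, so the weighting interpolates between the two training signals.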
Verma, Yang and Kim further teach: wherein the processor implements calculating the weighted sum of the first output error and the second output error at a rate different from the first rate (Examiner notes that nothing in Kim suggests that the rates would be the same or not the same for the effectuated calculation of the weighted sum).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI, whose telephone number is (571) 270-3916. The examiner can normally be reached M-F 8-5 Eastern. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES T TSAI/
Primary Examiner, Art Unit 2147

Prosecution Timeline

Sep 02, 2022
Application Filed
Feb 13, 2026
Non-Final Rejection — §103
Mar 19, 2026
Interview Requested
Mar 25, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585958
METHOD AND SYSTEM FOR TWO-STEP HIERARCHICAL MODEL OPTIMIZATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12577416
METHOD FOR GENERATING A COMPOSITION FOR DYES, PAINTS, PRINTING INKS, GRIND RESINS, PIGMENT CONCENTRATES OR OTHER COATING SUBSTANCES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579413
Method and Apparatus for Performing Convolution Neural Network Operations
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566985
METHOD AND SYSTEM FOR PERFORMING DATA PREDICTION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561569
INFORMATION PROCESSING METHOD FOR REDUCING STORAGE REQUIREMENTS FOR WEIGHT PARAMETER VALUES OF LEARNED DATA SETS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62% (99% with interview, +56.0% lift)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 297 resolved cases by this examiner. Grant probability derived from career allow rate.
