Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,429

LEARNING METHOD, LEARNED MODEL, DETECTION SYSTEM, DETECTION METHOD, AND PROGRAM

Non-Final OA §103
Filed: Apr 12, 2023
Examiner: KOETH, MICHELLE M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Omron Corporation
OA Round: 3 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% (above average; 331 granted / 429 resolved; +15.2% vs TC avg)
Interview Lift: strong, +16.7% allow rate in resolved cases with an interview vs. without
Avg Prosecution: 2y 4m typical timeline (34 cases currently pending)
Total Applications: 463 across all art units
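The headline numbers above follow directly from the underlying counts. A minimal sketch, using the granted/resolved counts shown on this page and reading the "+16.7% interview lift" as percentage points added to the base allow rate:

```python
# Reproduce the examiner-intelligence headline stats from raw counts.
# Counts (331 granted / 429 resolved) and the +16.7% lift are from the page.
granted, resolved = 331, 429

allow_rate = granted / resolved            # career allow rate
print(f"{allow_rate:.0%}")                 # 77%

# Interview lift treated as percentage points on the base allow rate.
interview_lift = 0.167
with_interview = allow_rate + interview_lift
print(f"{with_interview:.0%}")             # 94%
```

This also explains the small display inconsistency between "+17%" and "+16.7%": the former is just the lift rounded to a whole point.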

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 62.2% (+22.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 429 resolved cases
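The per-statute deltas are just the examiner's rate minus the tech-center average. A small sketch, where the examiner rates come from the chart above and the TC averages are backed out from the displayed deltas (rate minus delta), so treat them as estimates, consistent with the chart's "estimate" caveat:

```python
# Reconstruct the statute-specific deltas shown in the chart.
# Examiner rates are from the page; TC averages are backed-out estimates.
examiner = {"101": 7.4, "103": 62.2, "102": 8.5, "112": 14.7}    # percent
tc_avg   = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}  # percent (estimate)

for statute, rate in examiner.items():
    delta = rate - tc_avg[statute]
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```

Notably, every displayed delta backs out to the same 40% TC average, which suggests the chart compares each statute against a single tech-center baseline rather than per-statute baselines.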

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 20, 2026 has been entered.

Response to Arguments

Applicant's amendment to cancel claims 1 and 3–5 in the Amendment with RCE filed February 20, 2026 (herein "Amendment") has mooted the rejection under 35 U.S.C. 101 against these claims. Accordingly, the rejection is withdrawn.

Applicant's amendments and arguments in the Amendment regarding the rejection of remaining pending claims 6 and 8–9 under 35 U.S.C. 103 have been fully considered but are not persuasive. Specifically, Applicant argues on page 6 that primary reference Kimura does not teach or suggest the claimed "determination result acquisition processing of inputting the mask image prepared in the detection mask image preparation processing to the learned model and acquiring … corresponding to the mask image prepared in the detection mask image preparation processing," because Kimura's Fig. 5 teaches an Rmask but does not teach that this Rmask is input to the learned model for acquiring results corresponding to the mask image. However, Kimura's Fig. 5 and its Rmask teachings are disclosed in col. 7, ll. 36–67, a different portion with different teachings than the portions relied upon in the rejection, namely Kimura col. 16, ll. 1–4, and col. 9, ll. 7–51.
In the actually cited portions of Kimura for the limitation at issue, Kimura teaches first in col. 16, ll. 1–4:

[Excerpt of Kimura col. 16, ll. 1–4, reproduced as an image in the original Office action.]

Turning then to col. 9, ll. 7–51, teaching the details of step S36 and subsequent steps, Kimura teaches that the background model updating processing uses a GMM as the background model and that the processing is "based on" (thus having as an input) the second frame of the input image and the background frame of the first frame, where the "background frame of the first frame" is described as (initial mask area) in l. 2 of the excerpt reproduced below for convenience:

[Excerpt of Kimura col. 9, ll. 7–51, reproduced as an image in the original Office action.]

Therefore, the cited portions of Kimura teach the following claim limitation, where the portions in square brackets are those for which Endoh is relied upon: "determination result acquisition processing of inputting the mask image prepared in the detection mask image preparation processing to the learned model and acquiring [the first determination result and the second determination result] corresponding to the mask image prepared in the detection mask image preparation processing from the learned model." Accordingly, the rejection of this portion of the claim, which has not been amended, in reliance upon the combination of Kimura and Endoh, is maintained.

Applicant has further amended claims 6 and 9 to clarify that the detection mask image preparation processing is "by setting a mask region covering a specific portion of the region of attention, the specific portion being masked in the mask image," but Applicant has not specifically argued the distinction of these limitations over the currently cited art. Accordingly, upon further consideration, currently cited Kimura is found to teach these newly amended limitations, with the rejection rationale updated below accordingly.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6 and 8–10 are rejected under 35 U.S.C. 103 as being unpatentable over Kimura, US Patent No. 9,202,285 B2 (herein "Kimura"), in view of Endoh et al., US Patent No. 11,210,513 B2 (herein "Endoh").

Regarding claim 6, with deficiencies of Kimura noted in square brackets [], Kimura teaches a detection system comprising (Kimura fig. 18, drive 910 and removable medium 911, col. 17, l. 54–col. 18, l. 17, and col. 16, ll. 5–22 and ll. 52–56, computer that executes the series of processing disclosed, including object detection from an image): a storage configured to store a learned model (Kimura col. 17, l. 59–col. 18, l. 20, storage unit 908 installing a program performing the disclosed processing from medium 911 and drive 910, where col. 14, ll. 59–64, teaches a background model updated as part of the processing); and an arithmetic circuit (Kimura col. 17, l. 59–col. 18, l. 8, CPU for executing the program), wherein the learned model learns: a first relationship between first non-mask information based on a portion excluding a first mask region in a first mask image in which the first mask region covering a first specific portion is set (Kimura col. 14, ll. 45–58, positional information indicating the position of the object in the image is supplied to a mask area setting unit to set an initial mask area that masks the object in the image, where col. 14, ll. 59–67, teaches a background model is trained via a background model updating unit to understand areas other than the initial mask area in an input image as being a background image (portion excluding the mask region in the mask image)) and [a first determination result indicating whether or not the first mask image includes a target object region in which a target object is present]; and a second relationship between second non-mask information based on a portion excluding a second mask region in a second mask image in which the second mask region covering a second specific portion is set (Kimura col. 15, ll. 13–15, col. 15, l. 55–col. 16, l. 5, fig. 13, teach that the process shown repeats a plurality of times, including the mask setting step similar to S213 of fig. 12, and where S236 (similar to S214) updates the background model for a second frame) and [a second determination result indicating whether or not the second mask image includes the target object region], and wherein the arithmetic circuit executes: detection target image acquisition processing of acquiring a detection target image (Kimura col. 15, ll. 20–25, second frame is input for object detection processing); region of attention setting processing of setting a part or a whole of the detection target image as a region of attention (Kimura col. 15, ll. 20–25, object detection processing performed on the second frame image to detect an object and determine area information representing an object area (region of attention)); detection mask image preparation processing of preparing the mask image from the region of attention by setting a mask region covering a specific portion on the region of attention, the specific portion being masked in the mask image (Kimura col. 15, ll. 25–66, positional information indicating the position of the object in the image is supplied to a mask area setting unit to set an initial mask area (mask region covering the specific portion of an object – note that the broadest reasonable interpretation of "specific portion" can include the entire portion as well) that masks the object in the image, the now masked image being a "mask image"); determination result acquisition processing of inputting the mask image prepared in the detection mask image preparation processing to the learned model and acquiring [the first determination result and the second determination result] corresponding to the mask image prepared in the detection mask image preparation processing from the learned model (Kimura col. 16, ll. 1–4, col. 9, ll. 7–51, step S236 and subsequent steps performed similar to step S36 and subsequent steps, including steps S36 and S37, where the background image from the initial mask area is compared (inputting the mask image) and as a result information on the area/portion that does not exist in the background image can be obtained, and a foreground separated from the background); and [determination processing of determining whether or not the region of attention includes the target object region based on the first determination result and the second determination result acquired in the determination result acquisition processing].

Kimura does not explicitly teach, but Endoh teaches, a first determination result indicating whether or not the mask image includes a target object region in which a target object is present, and a second determination result indicating whether or not the second mask image includes the target object region, and therefore the first and second determination results (Endoh col. 5, ll. 4–18 and 45–54, training data including an image, a mask of a part of an object associated with an image, a mask of the entire object, and an identifier of the target object in the image, where the presence of the object identifier serves as a determination result as to whether or not the corresponding mask image includes a target object, and where fig. 5 illustrates multiple images in the training data set that are processed to output a determination result, thus a first and second determination result); determination processing of determining whether or not the region of attention includes the target object region based on the first determination result and the second determination result acquired in the determination result acquisition processing (Endoh col. 8, ll. 42–63, identification section identifies the position of the target object in the input image based on an area of a region in the image (region of attention), and outputs a detection candidate for display, and where fig. 5 illustrates multiple images in the training data set that are processed to output a determination result, thus a first and second determination result).

Therefore, taking the teachings of Kimura and Endoh together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the training data preparation of Kimura with the object identifier data and detection disclosed in Endoh, at least because doing so would increase detection accuracy of an object in image data with partial blocking/masking of the object. See Endoh col. 3, ll. 29–33.

Regarding claim 8, Kimura teaches wherein when the region of attention includes the target object region (Kimura col. 15, ll. 20–25, object detection unit supplies area information representing an object area as a rectangular area), the determination processing identifies a shielded region in which a part of the target object is shielded in the region of attention (Kimura col. 15, ll. 61–67, mask area determined from the mask over the object area) based on the first determination result and the second determination result (Kimura col. 15, ll. 13–15, col. 16, ll. 1–4, steps in fig. 13 continuously performed, including an output of the background model, which is trained across iterations and thus based on the plurality of determination results from prior iterations).

Regarding claim 9, with deficiencies of Kimura noted in square brackets [], Kimura teaches a detection method executed by an arithmetic circuit using a learned model (Kimura fig. 18, drive 910 and removable medium 911, col. 17, l. 54–col. 18, l. 17, and col. 16, ll. 5–42 and ll. 52–56, computer that executes the series of processing disclosed, including object detection from an image, using a trained background model), wherein the learned model learns: a first relationship between first non-mask information based on a portion excluding a first mask region in a first mask image in which the first mask region covering a first specific portion is set (Kimura col. 14, ll. 45–58, positional information indicating the position of the object in the image is supplied to a mask area setting unit to set an initial mask area that masks the object in the image) and [a first determination result indicating whether or not the first mask image includes a target object region in which a target object is present]; and a second relationship between second non-mask information based on a portion excluding a second mask region in a second mask image in which the second mask region covering a second specific portion is set (Kimura col. 14, ll. 59–67, col. 15, l. 55–col. 16, l. 5, the background model is updated again via the transition processing for a second frame that has also been processed to identify a mask area and areas other than the mask area) and [a second determination result indicating whether or not the second mask image includes the target object region], the detection method comprising: detection target image acquisition processing of acquiring a detection target image (Kimura col. 15, ll. 20–25, second frame is input for object detection processing); region of attention setting processing of setting a part or a whole of the detection target image as a region of attention (Kimura col. 15, ll. 20–25, object detection processing performed on the second frame image to detect an object and determine area information representing an object area (region of attention)); detection mask image preparation processing of preparing a mask image from the region of attention by setting a mask region covering a specific portion on the region of attention, the specific portion being masked in the mask image (Kimura col. 15, ll. 25–66, positional information indicating the position of the object in the image is supplied to a mask area setting unit to set an initial mask area (mask region covering the specific portion of an object – note that the broadest reasonable interpretation of "specific portion" can include the entire portion as well) that masks the object in the image, the now masked image being a "mask image"); determination result acquisition processing of inputting the mask image prepared in the detection mask image preparation processing to the learned model and acquiring [the first determination result and the second determination result] corresponding to the mask image prepared in the detection mask image preparation processing from the learned model (Kimura col. 16, ll. 1–4, col. 9, ll. 7–51, step S236 and subsequent steps performed similar to step S36 and subsequent steps, including steps S36 and S37, which are repeated, where the background image from the initial mask area is compared (inputting the mask image) and as a result information on the area/portion that does not exist in the background image can be obtained, and a foreground separated from the background); and [determination processing of determining whether or not the region of attention includes the target object region based on the first determination result and the second determination result acquired in the determination result acquisition processing].
Kimura does not explicitly teach, but Endoh teaches, a first determination result indicating whether or not the mask image includes a target object region in which a target object is present and a second determination result indicating whether or not the second mask image includes the target object region, and therefore the first and second determination results (Endoh col. 5, ll. 4–18 and 45–54, training data including an image, a mask of a part of an object associated with an image, a mask of the entire object, and an identifier of the target object in the image, where the presence of the object identifier serves as a determination result as to whether or not the corresponding mask image includes a target object, and where fig. 5 illustrates multiple images in the training data set that are processed to output a determination result, thus a first and second determination result); determination processing of determining whether or not the region of attention includes the target object region based on the first determination result and the second determination result acquired in the determination result acquisition processing (Endoh col. 8, ll. 42–63, identification section identifies the position of the target object in the input image based on an area of a region in the image (region of attention), and outputs a detection candidate for display, and where fig. 5 illustrates multiple images in the training data set that are processed to output a determination result, thus a first and second determination result).

Therefore, taking the teachings of Kimura and Endoh together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the training data preparation of Kimura with the object identifier data and detection disclosed in Endoh, at least because doing so would increase detection accuracy of an object in image data with partial blocking/masking of the object. See Endoh col. 3, ll. 29–33.
Regarding claim 10, Kimura teaches a non-transitory computer-readable storage medium storing a program for causing an arithmetic circuit to execute the detection method according to claim 9 (Kimura col. 17, l. 59–col. 18, l. 20, storage unit 908 installing a program performing the disclosed processing from medium 911 and drive 910, and a CPU for executing the program).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH, whose telephone number is (571) 272-5908. The examiner can normally be reached Monday–Thursday, 09:00–17:00, and Friday, 09:00–13:00, EDT/EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MICHELLE M. KOETH
Primary Examiner, Art Unit 2671
/MICHELLE M KOETH/
Primary Examiner, Art Unit 2671
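The pipeline the rejection maps onto Kimura (mask the detected object, update a background model only from non-masked pixels, then compare a later frame against the model to separate foreground from background) can be sketched as follows. This is an illustrative sketch only, not Kimura's actual implementation: a running average stands in for the GMM background model, and the frames, threshold, and `alpha` parameter are hypothetical.

```python
# Illustrative mask-then-update background-subtraction sketch
# (running average substituted for Kimura's GMM background model).
import numpy as np

def update_background(model, frame, mask, alpha=0.5):
    """Update the background model from non-masked pixels only.
    Pixels inside the mask (the detected object) are left untouched."""
    out = model.copy()
    keep = ~mask
    out[keep] = (1 - alpha) * model[keep] + alpha * frame[keep]
    return out

def foreground(model, frame, thresh=10.0):
    """Pixels that differ from the background model beyond a threshold."""
    return np.abs(frame - model) > thresh

# First frame: flat background with an object (value 100) in a 2x2 patch.
frame1 = np.zeros((4, 4)); frame1[1:3, 1:3] = 100.0
mask1 = frame1 > 50                      # initial mask area over the object
model = update_background(np.zeros((4, 4)), frame1, mask1)

# Second frame: the object has moved; comparing against the learned
# background yields the foreground (the object's new position).
frame2 = np.zeros((4, 4)); frame2[0:2, 2:4] = 100.0
fg = foreground(model, frame2)
```

Because the object pixels were masked out of the first update, the learned background stays flat, so only the object's new position survives the comparison, which is the "foreground separated from the background" step the rejection cites.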

Prosecution Timeline

Apr 12, 2023: Application Filed
Jul 22, 2025: Non-Final Rejection (§103)
Oct 08, 2025: Interview Requested
Oct 14, 2025: Examiner Interview Summary
Oct 14, 2025: Applicant Interview (Telephonic)
Oct 24, 2025: Response Filed
Nov 18, 2025: Final Rejection (§103)
Feb 20, 2026: Request for Continued Examination
Feb 27, 2026: Response after Non-Final Action
Mar 10, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586221: METHOD AND APPARATUS FOR ESTIMATING DEPTH INFORMATION OF IMAGES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579651: IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567241: Method For Generating Training Data Used To Learn Machine Learning Model, System, And Non-Transitory Computer-Readable Storage Medium Storing Computer Program (granted Mar 03, 2026; 2y 5m to grant)
Patent 12567177: METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE PROCESSING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566493: METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 94% (+16.7%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 429 resolved cases by this examiner. Grant probability derived from career allow rate.
