Prosecution Insights
Last updated: April 19, 2026
Application No. 18/145,016

IMAGING SUPPORT DEVICE, IMAGING APPARATUS, IMAGING SUPPORT METHOD, AND PROGRAM

Non-Final OA — §102, §103
Filed
Dec 22, 2022
Examiner
OSINSKI, MICHAEL S
Art Unit
2674
Tech Center
2600 — Communications
Assignee
Fujifilm Corporation
OA Round
3 (Non-Final)
75%
Grant Probability
Favorable
3-4
OA Rounds
2y 7m
To Grant
98%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
466 granted / 619 resolved
+13.3% vs TC avg
Strong +23% interview lift
+23.2%
Interview Lift
resolved cases with vs. without interview
Typical timeline
2y 7m
Avg Prosecution
12 currently pending
Career history
631
Total Applications
across all art units

Statute-Specific Performance

§101
9.5%
-30.5% vs TC avg
§103
42.5%
+2.5% vs TC avg
§102
22.3%
-17.7% vs TC avg
§112
17.7%
-22.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 619 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/2/2026 has been entered.

Information Disclosure Statement

2. The information disclosure statement(s) (IDS) submitted on 1/20/2026 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.

Response to Arguments

3. Applicant's arguments with respect to the previously pending claims have been fully considered but they are moot in view of new ground(s) of rejection necessitated by Applicant's amendments to the pending claims, wherein the new ground(s) of rejection incorporate different citations/explanations of the previously cited and relied upon prior art references. Additionally, it is noted that the Applicant argues that the previously cited and relied upon prior art reference Matsumoto (US PGPub 2011/0221922) fails to disclose "generating an imaging support screen and display the imaging support screen on a live view image in a superimposed manner". The Examiner respectfully disagrees. There are multiple instances within Matsumoto where an imaging support screen is generated and displayed on a live view image in a superimposed manner.
For example, Figures 3A, 4A, 5A, and 6 show various examples where various buttons, icons, and messages (shown as components 12, 24, 30, 36) are displayed in a superimposed manner on a live view image being photographed and displayed, which function to guide a user of the device in capturing appropriate/desired types of images whose conditions are monitored within the live view image; therefore it is concluded that the limitations of "generating an imaging support screen and display the imaging support screen on a live view image in a superimposed manner" are anticipated within the disclosure of Matsumoto.

Claim Rejections – 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1, 3-4, 9-13, 16-17, 19-20, 22, and 24-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Matsumoto (US PGPub 2011/0221922) [hereafter Matsumoto].

5. As to claim 1, Matsumoto discloses an imaging support device (shooting device 200 shown in Figure 1) comprising: a processor (computer embodying the various components of the shooting device); and a memory (non-transitory computer readable recording medium storing computer program code causing the computer to execute the disclosed shooting assistance method/operations) connected to or built in the processor, wherein the processor is configured to acquire frequency information (tally information) indicating a frequency of a feature of a subject (extracted facial features used to determine shooting tendency factors such as facial expressions, poses, postures, face angles, face size, number of people, combination of subjects, etc.)
specified from a captured image obtained by imaging with an imaging apparatus (photography means 202), the feature being classified into a category (specific sub-categories of shooting tendency factors such as angry faces, crying faces, front and profile poses, etc.) based on the feature, perform support processing (photography assistance processing by displaying user interfaces shown in Figures 3-11 enabling a user to visually recognize and choose the types of images being emphasized for recording as well as instructions and suggestions for capturing new images) of supporting the imaging with the imaging apparatus based on the frequency information, and generate an imaging support screen (various icons/buttons/messages 12, 24, 30, 36 as shown in Figures 3A, 4A, 5A, and 6) and display the imaging support screen on a live view image in a superimposed manner, wherein the category includes a plurality of large categories comprising a first large category (categories for an individual person A-E being imaged and/or categories of shooting tendency factors such as facial expression, pose, posture, face angle, etc.) and a second large category (categories for another individual person A-E being imaged and/or categories of other shooting tendency factors such as facial expression, pose, posture, face angle, etc.), the first large category includes a plurality of first small categories (shooting tendency factors for an individual A-E such as facial expression, pose, posture, face angle, etc. and/or sub-categories of the shooting tendency factors such as angry, crying, front profile, looking up, specific combinations of individuals, etc.) 
subordinate to the first large category, the plurality of first small categories are classified into a plurality of features that are subordinate to the feature of the first large category, the second large category includes a plurality of second small categories (shooting tendency factors for another individual A-E such as facial expression, pose, posture, face angle, etc. and/or other sub-categories of the shooting tendency factors such as angry, crying, front profile, looking up, specific combinations of individuals, etc.) subordinate to the second large category, the plurality of second small categories are classified into a plurality of features that are subordinate to the feature of the second large category, at least one target category is selected from the first large category, the second large category, the plurality of first small categories, or the plurality of second small categories, the target category is a category determined based on the frequency information (tallying means 208 tallies the amount of image data containing a person as well as the shooting tendency factors associated with each imaged person), and the support processing is processing including processing of supporting the imaging for a target category subject having the feature belonging to the target category (as shown in Figures 6-7 where it is determined that subject C has a low tally amount indicating the amount of images captured and a low tally amount for subject C being depicted in a profile imaging position, therefore guidance is displayed/issued to enable a user to successfully capture a profile image of subject C) (Paragraphs 0010, 0018-0019, 0044-0045, 0047-0048, 0050-0051, 0055, 0060-0064, 0067-0068, 0071, 0074-0081, 0085, 0088, 0093-0098, 0104-0105, a shooting device implementing a computer executing computer program code stored on a non-transitory CRM receives images captured by photography means 202 that includes lenses and image sensors and supplies the captured images 
to an identification data production means 204 and a matching means 206 that perform facial recognition to extract all the faces of the people present within the images and determine specific shooting tendency factors that include categories of facial expressions, poses, postures, angles, sizes, etc., from the extracted facial data and enable the device to tally each instance of the recognition of the faces having characteristics of the shooting tendency factors, which are displayed to a user for selection based on a user interaction with various icons/buttons/messages superimposed on a live view/preview image being displayed on the display means and used to drive a photography assistance function that provides guidance to the user on how to acquire images for specific subjects that have the specified shooting tendency factors based on the recorded tallies for each factor).

6. As to claim 3, Matsumoto discloses the support processing is processing including display processing of performing display for recommending to image the target category subject (as shown in Figures 5-7) (Paragraphs 0078-0081, 0085).

7. As to claim 4, Matsumoto discloses the display processing is processing of displaying an image for display obtained by the imaging with the imaging apparatus on a display (display means 214) and displaying a target category subject image (as shown in Figures 5-7) indicating the target category subject in the image for display in an aspect that is distinguishable (highlight frame 34/38) from other image regions (Paragraphs 0064, 0078-0081, 0085).

8. As to claim 9, Matsumoto discloses in a case in which the target category subject is imaged by the imaging apparatus, the target category is a category into which the feature for the target category subject is classified, the category being determined in accordance with a state of the target category subject (Paragraphs 0047-0048, 0071).

9.
As to claim 10, Matsumoto discloses in a case in which a plurality of objects are imaged by the imaging apparatus (such as shown in Figures 3-6), the target category is an object target category in which each of the plurality of objects themselves is able to be specified (Paragraphs 0047-0048, 0071).

10. As to claim 11, Matsumoto discloses the category is created for at least one unit (Paragraphs 0048, 0088, 0095-0098).

11. As to claim 12, Matsumoto discloses wherein one of the units is a period (Paragraph 0088).

12. As to claim 13, Matsumoto discloses wherein one of the units is a position (Paragraphs 0048, 0085).

13. As to claim 16, Matsumoto discloses the support processing is processing including processing of displaying the frequency information (as shown in Figures 3-5 and 10-11) (Paragraphs 0076-0084, 0093-0098).

14. As to claim 17, Matsumoto discloses the support processing is processing including processing of, in a case in which the frequency information is designated by a reception device (user interface enabling user clicking on display to select appropriate options/buttons being displayed) in a state in which the frequency information is displayed, supporting the imaging related to the category corresponding to the designated frequency information (Paragraphs 0092-0098).

15. As to claims 19-20 and 22, the Matsumoto reference discloses all claimed subject matter as discussed above with respect to the comments/citations of claim 1.

16. As to claim 24, Matsumoto discloses each of the plurality of first small categories is associated with the frequency information regarding the feature classified into the corresponding first small category, wherein each of the plurality of second small categories is associated with the frequency information regarding the feature classified into the corresponding second small category (Paragraphs 0048-0051, 0085, 0094-0098).

17.
As to claim 25, the first large category is associated with the frequency information regarding the feature classified into the first large category, wherein the second large category is associated with the frequency information regarding the feature classified into the second large category (Paragraphs 0048-0051, 0085, 0094-0098).

18. As to claim 26, the frequency information associated with the first large category is a sum of the frequency information associated with each of the plurality of first small categories, wherein the frequency information associated with the second large category is a sum of the frequency information associated with each of the plurality of second small categories (Paragraphs 0048-0051, 0085, 0094-0098).

Claim Rejections – 35 USC § 103

19. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

20. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Matsumoto (US PGPub 2011/0221922) [hereafter Matsumoto] in view of Shimamura (US PGPub 2010/0157084) [hereafter Shim].

21. As to claim 6, it is noted that Matsumoto fails to particularly disclose the processor displays an object indicating a designated imaging range determined in accordance with a given instruction from an outside and an object indicating the target category subject in different display aspects.
On the other hand, Shim discloses a processor (image processing unit 18 as shown in Figure 2) that displays an object (focus marks F11-F13 surrounding faces detected according to a user activating a correction mode as shown in Figures 10-11) indicating a designated imaging range determined in accordance with a given instruction (execution of a correction mode initiated by user operation) from an outside and an object (templates M111-M122 or an enlarged partial image area A21 for faces satisfying user selected settings shown in Figure 3) indicating the target category subject (detected faces meeting user selected settings shown in Figure 3) in different display aspects (Paragraphs 0033-0035, 0041, 0048-0053, 0057-0061, 0091-0095).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the application/invention to include displaying an object indicating a designated imaging range determined in accordance with a given instruction from an outside and an object indicating the target category subject in different display aspects as taught by Shim with the imaging support device of Matsumoto, because the cited prior art references are directed towards image capturing devices that detect facial feature information and display distinguishing information relating to various characteristics of the detected facial feature information, and because the claimed limitations are fully disclosed within the cited prior art references and would yield predictable results of enabling a user to more easily recognize which imaged faces being displayed meet the criteria selected by the user for being labeled a target category subject in relation to the imaging range including detected faces that do not meet said criteria.

Claims

22. Claim 27 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

23.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL S OSINSKI whose telephone number is (571) 270-3949. The examiner can normally be reached on Monday - Thursday, 10:00am - 6:00pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached on (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MO
/MICHAEL S OSINSKI/
Primary Examiner, Art Unit 2674
2/6/2026

Prosecution Timeline

Dec 22, 2022
Application Filed
Jul 11, 2025
Non-Final Rejection — §102, §103
Oct 15, 2025
Response Filed
Oct 29, 2025
Final Rejection — §102, §103
Feb 02, 2026
Request for Continued Examination
Feb 06, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596951
MULTISCALE CONTIGUOUS BLOCK PIXEL ENTANGLER FOR IMAGE RECOGNITION ON HYBRID QUANTUM-CLASSICAL COMPUTING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12586351
STORAGE MEDIUM, SPECIFYING METHOD, AND INFORMATION PROCESSING DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12579657
IMAGING DEVICE AND METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12573028
NEURAL NETWORK FOR IMAGE REGISTRATION AND IMAGE SEGMENTATION TRAINED USING A REGISTRATION SIMULATOR
2y 5m to grant Granted Mar 10, 2026
Patent 12554796
OPTIMIZING PARAMETER ESTIMATION FOR TRAINING NEURAL NETWORKS
2y 5m to grant Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
98%
With Interview (+23.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
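
The headline projections above follow from simple arithmetic on the examiner's career record. A minimal sketch of that math (the additive interview-lift model and variable names are assumptions for illustration, not this product's documented methodology):

```python
# Hypothetical reconstruction of the dashboard's headline figures.
# Inputs are taken from the report above; treating the interview lift
# as a simple additive bump is an assumption, not a stated method.

granted = 466          # from "466 granted / 619 resolved"
resolved = 619
interview_lift = 23.2  # reported percentage-point lift with interview

allow_rate = 100 * granted / resolved              # career allow rate, %
with_interview = min(allow_rate + interview_lift, 100.0)

print(f"Career allow rate: {allow_rate:.1f}%")     # ~75.3%, shown as 75%
print(f"With interview:    {with_interview:.1f}%") # ~98.5%, shown as 98%
```

The same subtraction explains the "+13.3% vs TC avg" figure: the career allow rate minus the Tech Center average estimate.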
