Prosecution Insights
Last updated: April 19, 2026
Application No. 18/945,737

IMAGING DEVICE, IMAGING INSTRUCTION METHOD, AND IMAGING INSTRUCTION PROGRAM

Non-Final OA (§103, §DP)
Filed: Nov 13, 2024
Examiner: TRAN, LOI H
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Fujifilm Corporation
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% — grants 64% of resolved cases (394 granted / 611 resolved; +6.5% vs TC avg)
Interview Lift: +23.6% for resolved cases with interview (strong)
Avg Prosecution: 2y 10m (typical timeline); 25 currently pending
Total Applications: 636 across all art units (career history)
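As a sanity check, the headline figures above follow directly from the raw counts. A minimal sketch, assuming the dashboard rounds to whole percentages and treats the interview lift as additive percentage points (neither rounding rule is documented):

```python
# Sketch: deriving the dashboard's headline numbers from the raw counts.
# The rounding rule and the additive treatment of interview lift are
# assumptions, not documented behavior of the tool.
granted = 394
resolved = 611

allow_rate = granted / resolved                 # ~0.645 career allow rate
print(f"Career allow rate: {allow_rate:.1%}")   # ~64.5%, displayed as 64%

interview_lift = 0.236                          # +23.6 percentage points
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.1%}")  # ~88.1%, displayed as 88%
```

Both displayed values (64% and 88%) are consistent with this reading of the counts.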

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Deltas are relative to an estimated Tech Center average • Based on career data from 611 resolved cases
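The per-statute deltas are internally consistent: subtracting each delta from its rate recovers the same Tech Center baseline. A quick check, assuming the deltas are simple percentage-point differences:

```python
# Each statute row shows a rate plus its delta vs the Tech Center average;
# rate - delta should recover the baseline. Treating the deltas as
# percentage-point differences is an assumption.
rows = {
    "§101": (6.3, -33.7),
    "§103": (54.9, +14.9),
    "§102": (14.8, -25.2),
    "§112": (12.5, -27.5),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute implies the same ~40.0% TC baseline
```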

Office Action

§103 §DP
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Double Patenting

3. Claims 21-33 and 35-36 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,170,838. Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-20 of Patent No. 12,170,838 contain every element of claims 21-33 and 35-36 of the instant application and thus anticipate the claims of the instant application. Claims 21-33 and 35-36 of the instant application therefore are not patentably distinct from the earlier patent claims and as such are unpatentable under obviousness-type double patenting. A later patent/application claim is not patentably distinct from an earlier claim if the later claim is anticipated by the earlier claim.

The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s).
See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to: www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

5. Claims 21, 27-28, 31, and 33-36 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kimura et al. (US Publication 2017/0236552) in view of Yoshizawa (English Translation of Japanese Publication JP 2008-072491, 03-2008).

Regarding claim 21, Kimura discloses an imaging device comprising: an imaging unit having a single imaging element; and a processor including a single signal-processing circuit (Kimura, fig. 2, imaging unit 184, single signal processing circuit is composed of sub units 187 and 188) configured to: receive a first instruction to issue an instruction to set a second imaging parameter different from a first imaging parameter for capturing of a first moving image based on the first imaging parameter and capturing of a second moving image based on the second imaging parameter by the single imaging element (Kimura, fig.
19, para’s 0138-0139, setting different imaging conditions “first instruction” indicating ISO sensitivity, shutter speed “imaging parameter” for capturing each of picture B and picture A, which are moving images);

receive a second instruction to start capturing of a second moving image based on the second imaging parameter after the reception of the first instruction to cause the single imaging element to start the capturing of the second moving image (Kimura, fig. 16, para’s 0129-0134, when a switch ST turns ON “receive second instruction” during the period of the imaging and recording of picture B “first moving image”, the imaging and recording of picture A “second moving image” are performed);

cause, in a case where the second instruction is received, the single imaging element to capture the first moving image until a third instruction to end the capturing of the first moving image is received (Kimura, fig. 16, para’s 0129-0134 and 0231-0234, the recording of picture B continues even after the switch ST has turned on, and when the switch MV turns off afterwards “third instruction”, the imaging and recording of picture B are stopped); and

cause, in a case where the third instruction is received, the single imaging element to end the capturing of the first moving image (Kimura, fig. 16, para’s 0129-0134 and 0231-0234, when the switch MV turns off afterwards, the imaging and recording of picture B are stopped).

Kimura does not explicitly disclose but Yoshizawa discloses receive a first instruction to issue an instruction to set a second imaging parameter different from a first imaging parameter during capturing of a first moving image based on the first imaging parameter by the single imaging element (Yoshizawa, fig.
11, para’s 0032-0033, modifying image settings “first instruction” such as the sensitivity of an imaging element during imaging of a moving image, and moving image recording is performed using the modified setting; providing a function that allows the user to reliably and easily change settings while shooting video continuously). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Yoshizawa’s features into Kimura’s invention to enhance the user’s image capturing experience and generate quality images by allowing image settings to be modified while capturing images.

Regarding claim 27, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor causes the single imaging element to end the capturing of the second moving image (Kimura, fig. 16, the imaging and recording of picture A are stopped at t37).

Regarding claim 28, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor causes the single imaging element to end the capturing of the second moving image and records the second moving image in association with the first moving image recorded from the reception of the second instruction to the reception of the third instruction (Kimura, fig. 16, para’s 0133-0134, during a period of time t33 to time t34 and a period of time t35 to time t36, the switch ST 154 used to shoot a still image is operated. Therefore, during these periods, image data on the “picture A” are also written onto the recording medium 193 after being subjected to predetermined signal processing. The image data on the “picture A” may also be written onto the recording medium 193 during the same period as that of the image data on the “picture B” in addition to the period of time t33 to time t34 and the period of time t35 to time t36.
In both of the “picture A” and the “picture B,” it is assumed that each piece of image data recorded on the recording medium 193 is a moving image at the same frame rate, e.g., 60 fps, and the NTSC time code is added).

Regarding claim 31, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor records the first moving image and the second moving image on a single recording medium (Kimura, para. 0134, picture B and picture A are associated with each other and recorded on a recording medium 193).

Regarding claim 33, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor causes a display device to display the first moving image and the second moving image (Kimura, para’s 0062-0063, display device 153).

Regarding claim 34, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor is configured to, in a duration after the reception of the second instruction until the reception of the third instruction, perform image processing based on a first image processing parameter and image processing based on a second image processing parameter different from the first image processing parameter at different timings to obtain a moving image corresponding to the first image processing parameter and another moving image corresponding to the second image processing parameter (Kimura, fig’s 6 and 12, para’s 0129-0134, during the period of t33 where switch ST is turned ON and t37 where switch MV is turned OFF, recording moving picture A and picture B with different timings).

Claims 35-36, comprising limitations substantially the same as claim 21, are rejected for the same reasons set forth. Kimura-Yoshizawa further discloses a computer-readable medium (see Kimura para. 0334).

6. Claims 22, 25, and 32 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kimura-Yoshizawa, as applied to claim 21 above, in view of Tsugita et al.
(English Translation of Japanese Publication JP 2008-288830, 11-2008).

Regarding claim 22, Kimura-Yoshizawa discloses the imaging device according to claim 21. Kimura-Yoshizawa does not explicitly disclose but Tsugita discloses wherein the processor receives an instruction by a user and situation data, which is regarded as the third instruction in a case where a predetermined condition is satisfied, as the third instruction (Tsugita, para’s 0002-0003, it is a common matter for an imaging device to automatically terminate imaging during the imaging of dynamic images when the free capacity of a recording medium is insufficient, and to read the free capacity of the recording medium, and calculate and display the possible imaging duration for dynamic images). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Tsugita’s features into Kimura-Yoshizawa’s invention to enhance the user’s image capturing experience by stopping image capturing in response to detecting an insufficient memory resource.

Regarding claim 25, Kimura-Yoshizawa-Tsugita discloses the imaging device according to claim 22, wherein in a case where a remaining capacity of a recording medium for recording the first moving image is smaller than a data size in a case where the first moving image is recorded for a remaining scheduled imaging time, the processor generates the situation data (Tsugita, para’s 0002-0003, it is a common matter for an imaging device to automatically terminate imaging during the imaging of dynamic images when the free capacity of a recording medium is insufficient, and to read the free capacity of the recording medium, and calculate and display the possible imaging duration for dynamic images). The motivation to combine the references and obviousness arguments are the same as for claim 22.

Regarding claim 32, Kimura-Yoshizawa discloses the imaging device according to claim 21.
Kimura-Yoshizawa does not explicitly disclose but Tsugita discloses wherein the processor causes a display device to display a remaining recording time in a case where only one of the first moving image and the second moving image is recorded and a remaining recording time in a case where both the first moving image and the second moving image are recorded (Tsugita, para’s 0002-0003, it is a common matter for an imaging device to automatically terminate imaging during the imaging of dynamic images when the free capacity of a recording medium is insufficient, and to read the free capacity of the recording medium, and calculate and display the possible imaging duration for dynamic images). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Tsugita’s features into Kimura-Yoshizawa’s invention to enhance the user’s image capturing experience by displaying the remaining recording time when at least one of a plurality of captured images is to be recorded.

7. Claims 29 and 30 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Kimura-Yoshizawa, as applied to claim 21 above, in view of Yoneda et al. (US Publication 2007/0024735).
Regarding claim 29, Kimura-Yoshizawa discloses the imaging device according to claim 21, wherein the processor performs image processing and compression processing on single moving image data output from the single imaging element to generate first moving image data for the first moving image, and performs image processing and compression processing on the single moving image data to generate second moving image data for the second moving image (Kimura, fig’s 3 and 7, para’s 0064-0069, 0076-0079, an imaging element 184, read circuits 308A, 308B are respectively connected to signal output lines 304A, 304B of each column, and output from two systems, namely output data 282, 283, is obtained, and that the output data 282, 283 respectively serve as video signal picture A and B for prescribed signal processing; and para’s 0159-0164, performing image processing and compression processing to encode and generate pictures including I-frames; fig. 16, para’s 0129-0134, At time t32, the switch MV 155 as a moving image shooting button is operated by a user to be turned on to start imaging of “picture B” and imaging of “picture A” are started in response thereto. In response to operating the switch MV 155 as the button to shoot a moving image, image data on the “picture B” are written onto the recording medium 193 after being subjected to predetermined signal processing. The reason for imaging the “picture A” simultaneously with imaging the “picture B” is to activate a crosstalk correction to be described later at all times. Since the transfer transistor 311A will be in the on-state unless the transfer pulse φTX1A illustrated in FIG. 13 is at the low level, the signal charge generated in the photodiode 310A is never accumulated.
However, if only the period of operating the switch ST 154 is targeted for the crosstalk correction, the “picture B” recorded at the operating timing of the switch ST 154 will be subjected to delicate brightness variation or hue variation due to the influence of a crosstalk correction error. During a period of time t33 to time t34 and a period of time t35 to time t36, the switch ST 154 used to shoot a still image is operated. Therefore, during these periods, image data on the “picture A” are also written onto the recording medium 193 after being subjected to predetermined signal processing. The image data on the “picture A” may also be written onto the recording medium 193 during the same period as that of the image data on the “picture B” in addition to the period of time t33 to time t34 and the period of time t35 to time t36).

Kimura-Yoshizawa does not explicitly disclose but Yoneda discloses generate first moving image data for the first moving image based on the first imaging parameter and generate second moving image data for the second moving image based on the second imaging parameter (Yoneda, para. 0009, color phase, sharpness, color gain, and white balance, and the like are used as parameter setting items for the imaging of dynamic images; para’s 0104-0106, parameter settings for the imaging of dynamic images are modified during imaging). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Yoneda’s features into Kimura-Yoshizawa’s invention to enhance the user’s image capturing experience by enabling imaging parameters to be modified during capturing, thereby generating quality images.
Regarding claim 30, Kimura-Yoshizawa-Yoneda discloses the imaging device according to claim 29, wherein the processor performs image processing and compression processing on single moving image data output from the single imaging element to generate first moving image data for the first moving image, and performs image processing and compression processing on the single moving image data to generate second moving image data for the second moving image; and wherein the processor generates and records I frames as the first moving image data and the second moving image data in a period in which the first moving image and the second moving image are recorded in parallel (Kimura, fig’s 3 and 7, para’s 0064-0069, 0076-0079, an imaging element 184, read circuits 308A, 308B are respectively connected to signal output lines 304A, 304B of each column, and output from two systems, namely output data 282, 283, is obtained, and that the output data 282, 283 respectively serve as video signal picture A and B for prescribed signal processing; and para’s 0159-0164, performing image processing and compression processing to encode and generate pictures including I-frames; fig. 16, para’s 0129-0134, At time t32, the switch MV 155 as a moving image shooting button is operated by a user to be turned on to start imaging of “picture B” and imaging of “picture A” are started in response thereto. In response to operating the switch MV 155 as the button to shoot a moving image, image data on the “picture B” are written onto the recording medium 193 after being subjected to predetermined signal processing. The reason for imaging the “picture A” simultaneously with imaging the “picture B” is to activate a crosstalk correction to be described later at all times. Since the transfer transistor 311A will be in the on-state unless the transfer pulse φTX1A illustrated in FIG. 13 is at the low level, the signal charge generated in the photodiode 310A is never accumulated.
However, if only the period of operating the switch ST 154 is targeted for the crosstalk correction, the “picture B” recorded at the operating timing of the switch ST 154 will be subjected to delicate brightness variation or hue variation due to the influence of a crosstalk correction error. During a period of time t33 to time t34 and a period of time t35 to time t36, the switch ST 154 used to shoot a still image is operated. Therefore, during these periods, image data on the “picture A” are also written onto the recording medium 193 after being subjected to predetermined signal processing. The image data on the “picture A” may also be written onto the recording medium 193 during the same period as that of the image data on the “picture B” in addition to the period of time t33 to time t34 and the period of time t35 to time t36; Yoneda, para. 0009, color phase, sharpness, color gain, and white balance, and the like are used as parameter setting items for the imaging of dynamic images; para’s 0104-0106, parameter settings for the imaging of dynamic images are modified during imaging). The motivation to combine the references and obviousness arguments are the same as for claim 29.

Allowable Subject Matter

8. Claims 23-24 and 26 contain allowable subject matter. It is noted that claims 23-24 and 26 are dependent upon a rejected base claim.

Conclusion

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN whose telephone number is (571) 270-5645. The examiner can normally be reached 8:00 AM - 5:00 PM PST, first Friday of biweek off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOI H TRAN/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Nov 13, 2024 — Application Filed
Dec 23, 2024 — Response after Non-Final Action
Feb 14, 2026 — Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598366
CONTENT DATA PROCESSING METHOD AND CONTENT DATA PROCESSING APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593112
METHOD, DEVICE, AND COMPUTER PROGRAM FOR ENCAPSULATING REGION ANNOTATIONS IN MEDIA TRACKS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592261
VIDEO EDITING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12576798
CAMERA SYSTEM AND ASSISTANCE SYSTEM FOR A VEHICLE AND A METHOD FOR OPERATING A CAMERA SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579810
SYSTEM AND METHOD FOR AUTOMATIC EVENTS IDENTIFICATION ON VIDEO
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview (+23.6%): 88%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
