Prosecution Insights
Last updated: April 19, 2026
Application No. 18/284,073

IMAGING DEVICE AND IMAGING SYSTEM

Status: Non-Final OA (§102, §103, §112)
Filed: Dec 15, 2023
Examiner: AGGARWAL, YOGESH K
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Nikon Corporation
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 90%, above average (998 granted / 1113 resolved; +27.7% vs TC avg)
Interview Lift: +6.8% (moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 7m average prosecution; 32 applications currently pending
Career History: 1145 total applications across all art units
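The figures above appear to be simple ratios. A minimal sketch of the arithmetic, assuming the allow rate is granted over resolved and the Tech Center comparison is a straight difference of rates; the TC_AVG_ALLOW_RATE baseline is back-calculated from the displayed +27.7% delta, not taken from the source:

```python
# Minimal sketch of how the examiner-level figures above appear to be derived.
# Assumption: allow rate = granted / resolved, and "vs TC avg" is a straight
# difference of rates. TC_AVG_ALLOW_RATE is back-calculated from the displayed
# +27.7% delta, not taken from the source.

GRANTED = 998
RESOLVED = 1113
TC_AVG_ALLOW_RATE = 0.62  # implied baseline: 89.7% - 27.7%

allow_rate = GRANTED / RESOLVED               # ~0.897, shown as 90%
delta_vs_tc = allow_rate - TC_AVG_ALLOW_RATE  # ~+0.277, shown as +27.7%

print(f"Career allow rate: {allow_rate:.1%}")  # 89.7%
print(f"vs TC average: {delta_vs_tc:+.1%}")    # +27.7%
```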

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 36.4% (-3.6% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Tech Center average is an estimate; based on career data from 1113 resolved cases.
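Assuming each "vs TC avg" figure is the difference between this examiner's statute-specific rate and the Tech Center estimate, the baseline can be backed out, and all four deltas point to roughly the same ~40% estimate. A minimal sketch of that back-calculation (variable names are illustrative, not from the source):

```python
# Back out the Tech Center baseline implied by each "vs TC avg" delta, assuming
# delta = examiner_rate - tc_average. Illustrative check only; the report does
# not state its formula.

examiner_rate = {"101": 5.3, "103": 49.8, "102": 36.4, "112": 5.1}   # percent
delta_vs_tc = {"101": -34.7, "103": 9.8, "102": -3.6, "112": -34.9}  # points

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")
# All four statutes back out to an implied Tech Center average of ~40.0%.
```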

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . CLAIM INTERPRETATION The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an analyzing part”, “a distance measurement part”, “a virtual image generating part” in claims 11, 12, 13-16, 28-32. These are shown in the specification at Paragraphs 73 and 172 in figs. 1 and 7. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 8, 9, 10, 12 and 29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention. Claims 8, 9, 10, 12 and 29 recites the limitation "the driving part” in lines 2, 3, 1, 2 and 2 respectively. There is insufficient antecedent basis for this limitations in the claims. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 1, 2, 4, 6, 7 and 8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Oe et al. (US Patent # 9,910,251). [Claim 1] An imaging device comprising: a first optical system (fig. 1A, primary lens group 11) configured to form an intermediate image of a subject (col. 
4 lines 13-16, The lens system 10 is composed, in order from the input side (object side, incident side) 90a, of a first lens group (primary lens group) 11 that forms light from the input side 90a into an image as an intermediate image 51); a second optical system (fig. 1a, secondary lens group 12) that is an optical system configured to form a final image by re-imaging at least a part of the intermediate image and that is configured to change a magnification of the final image (col. 4 lines 10-19, The lens system 10 is composed, in order from the input side (object side, incident side) 90a, of a first lens group (primary lens group) 11 that forms light from the input side 90a into an image as an intermediate image 51 and a second lens group (secondary lens group, relay lens group) 12 that forms light from the intermediate image 51 into an image as the final image 52. Col. 6 lines 1-15, With this lens system 10, when zooming from the wide-angle end to the telephoto end, while all of the lenses L101 to L110 of the first lens group 11 and the lenses L201 to L203 and the lenses L209 to L214 of the second lens group 12 do not move, zooming is carried out by monotonously moving the optical systems Z1 to Z3 along the optical axis 100 from the output side 90b toward the input side 90a. Accordingly, there is no fluctuation in the image formation position of the intermediate image 51 due to zooming, and at each zoom position from the wide-angle end to the telephoto end, it is possible to prevent the plane of the intermediate image from becoming positioned at a lens surface or inside a lens. This means that it is possible to suppress scratches or foreign matter such as dust on lens surfaces from appearing in the final image 52) {Therefore an imaging magnification of the second optical system 12 (a magnification of the final image 52 with respect to the intermediate image 51, i.e., a magnification of the final image 52) is changed}; and an imaging element configured to image the final image (col. 4 lines 25-28, One example of the imaging device 50 is an image sensor (imaging element) such as a CCD or a CMOS that converts the final image 52 into an electrical signal (image data)). [Claim 2] The imaging device according to claim 1, wherein the first optical system is telecentric at a side of the intermediate image (col. 7 lines 21-28, In this lens system 10, the positive lens L109, both of whose surfaces S15 and S16 are aspherical, is disposed on the input side 90a of the first optical system F. This means that it is possible, using the positive lens L109, to output light flux that has been gathered from a wide angle by the negative lens L101 disposed closest to the input side 90a to the output side 90b in a state that is extremely close to telecentric.). [Claim 4] The imaging device according to claim 1, wherein the second optical system is telecentric at a side of the first optical system (col. 1 lines 60-65, By fixing the second subsystem with positive refractive power on the output side of the first subsystem, it is possible to output light flux dispersed on the output side of the intermediate image to the output side in a state that is extremely close to telecentric. This means that it is possible to move the lens of the third subsystem relative to light flux that is incident in a state that is close to telecentric). 
[Claim 6] The imaging device according to claim 1, wherein a difference between an angle of a main beam of light from the first optical system with respect to the optical axis of the first optical system and an angle of a main beam of light entering the second optical system with respect to an optical axis of the second optical system is 1° or less (fig. 1 clearly shows that the main beams of light from first optical system 60 to the second optical system 66 are almost parallel which means that the difference is almost zero degree or less than 1 degree. By fixing the second subsystem with positive refractive power on the output side of the first subsystem, it is possible to output light flux dispersed on the output side of the intermediate image to the output side in a state that is extremely close to telecentric. This means that it is possible to move the lens of the third subsystem relative to light flux that is incident in a state that is close to telecentric, col. 1 lines 60-65). [Claim 7] The imaging device according to claim 1, wherein a maximum viewing angle of the first optical system is 170° or more (col. 5 lines 35-41, With this lens system 10, light flux that is taken in from a wide area (a wide angle) via the negative lens L101, which is disposed closest to the input side 90a and has the largest effective diameter, is guided by the first lens group 11 across the optical axis 100 to the opposite side (the second region 100b) to form an intermediate image (primary image formation) which is the inverted image 51. Wide angle lens systems are known to have a range of angles more than 170 degree or higher) . [Claim 8] The imaging device according to claim 1, wherein, when a driving part is set as a first driving part (inherently taught), the second optical system includes a plurality of optical members (fig. 1, Z1, Z2, Z3), and further includes a second driving part (inherently taught) configured to move at least one optical member of the plurality of optical members along an optical axis of the second optical system, and the second optical system changes the magnification of the final image by driving the second driving part (col. 6 lines 1-7, With this lens system 10, when zooming from the wide-angle end to the telephoto end, while all of the lenses L101 to L110 of the first lens group 11 and the lenses L201 to L203 and the lenses L209 to L214 of the second lens group 12 do not move, zooming is carried out by monotonously moving the optical systems Z1 to Z3 along the optical axis 100 from the output side 90b toward the input side 90a). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 9, 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251) in view of Shinzato (US PGPUB 20190103822). 
[Claim 9] Oe fails to teach a holding part configured to hold the second optical system and the imaging element, wherein the driving part moves the holding part in a direction intersecting the optical axis of the first optical system. However Shinzato teaches that in a case where a camera shake correction lens is built in (included) the lens barrel 740 or the optical system of the imaging apparatus, the vibration wave actuator 10 according to the above-described exemplary embodiment is applicable as a driving unit for a camera shake correction unit for moving the camera shake correction lens in a directions orthogonal to the optical axis of the optical system. In this case, to allow a lens holding member to move in two directions perpendicularly intersecting with each other in a plane perpendicularly intersecting with the optical axis direction, one or a plurality of vibration wave actuator units 10 for driving the lens holding member for each direction is disposed. Instead of driving the camera shake correction lens, the camera shake correction unit may move the image sensor 710 (built in the imaging apparatus main body) in directions orthogonal to the optical axis of the optical system. (Paragraph 80). Therefore taking the combined teachings of Oe and Shinzato, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have to hold the second optical system and the imaging element, wherein the driving part moves the holding part in a direction intersecting the optical axis of the first optical system in order to drive the lens for zooming and shake correction purposes during zooming thereby getting less blurred images. [Claim 10] Oe fails to teach wherein the driving part moves the second optical system and the imaging element in a direction parallel to a surface perpendicular to the optical axis of the first optical system. However Shinzato teaches that in a case where a camera shake correction lens is built in (included) the lens barrel 740 or the optical system of the imaging apparatus, the vibration wave actuator 10 according to the above-described exemplary embodiment is applicable as a driving unit for a camera shake correction unit for moving the camera shake correction lens in a directions orthogonal to the optical axis of the optical system. In this case, to allow a lens holding member to move in two directions perpendicularly intersecting with each other in a plane perpendicularly intersecting with the optical axis direction, one or a plurality of vibration wave actuator units 10 for driving the lens holding member for each direction is disposed. Instead of driving the camera shake correction lens, the camera shake correction unit may move the image sensor 710 (built in the imaging apparatus main body) in directions orthogonal to the optical axis of the optical system. (Paragraph 80). Therefore taking the combined teachings of Oe and Shinzato, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have the driving part moves the second optical system and the imaging element in a direction parallel to a surface perpendicular to the optical axis of the first optical system in order to drive the lens for zooming and shake correction purposes during zooming thereby getting less blurred images. 
[Claim 19] Oe fails to teach a driving part configured to move the second optical system and the imaging element in a direction intersecting an optical axis of the first optical system. However Shinzato teaches that in a case where a camera shake correction lens is built in (included) the lens barrel 740 or the optical system of the imaging apparatus, the vibration wave actuator 10 according to the above-described exemplary embodiment is applicable as a driving unit for a camera shake correction unit for moving the camera shake correction lens in a directions orthogonal to the optical axis of the optical system. In this case, to allow a lens holding member to move in two directions perpendicularly intersecting with each other in a plane perpendicularly intersecting with the optical axis direction, one or a plurality of vibration wave actuator units 10 for driving the lens holding member for each direction is disposed. Instead of driving the camera shake correction lens, the camera shake correction unit may move the image sensor 710 (built in the imaging apparatus main body) in directions orthogonal to the optical axis of the optical system. (Paragraph 80). Therefore taking the combined teachings of Oe and Shinzato, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have a driving part configured to move the second optical system and the imaging element in a direction intersecting an optical axis of the first optical system in order to drive the lens for zooming and shake correction purposes during zooming thereby getting less blurred images. Claim(s) 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251) in view of Hsu (US PGPUB 20130250155). [Claim 11] Oe fails to teach an analyzing part configured to analyze an image data of the subject generated by the imaging of the imaging element. However Hsu teaches an image capture assembly 200 includes an image processor 202 which performs various image processing functions. The image processor 202 is typically a programmable image processor but could be, for example, a hard-wired custom integrated circuit (IC) processor, a general purpose microprocessor, or a combination of hard-wired custom IC and programmable processors. When the image capture assembly 200 is part of a multipurpose portable electronic device such as a mobile phone, smartphone or superphone, at least some of the functions of the image capture assembly 200 may be performed by the main processor 102 or 202 of the host electronic device 100 (Paragraph 29). Therefore taking the combined teachings of Oe and Hsu, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have analyze an image data of the subject generated by the imaging of the imaging element in order to generate an image according to the user’s needs by programming the processor. [Claim 12] Oe fails to teach a visual field controller configured to execute at least one of driving of the driving part and changing of the magnification of the final image based on a result analyzed by the analyzing part. 
However Hsu teaches an image processor 202 that analyses the digital captured image signal C using autofocus calculations (e.g., contrast maximization) (308B) and produces focus signals based on the analysis (e.g., the result of the autofocus calculations) which drive the focus adjuster 206 to move the zoom lens 204 to adjust the focus of the image (308C) (Paragraph 47). Therefore taking the combined teachings of Oe and Hsu, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have driven the driving part and changing of the magnification of the final image based on a result analyzed by the analyzing part in order to generate an optimally focused image is displayed on the display 112 of the electronic device as a post-capture preview image. Claim(s) 13 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Hsu (US PGPUB 20130250155) and in further view of Taniguchi (US PGPUB 20030093810). [Claim 13] Oe in view of Hsu fails to teach an imaging controller configured to control start and end of recording of the image data imaged by the imaging element based on a result analyzed by the analyzing part. However Taniguchi teaches "Metadata" refers to data describing information in various kinds accompanied by video data or a data analysis result, which is data separately independent of "video data" This embodiment describes, as metadata, video-data file name (identifiers), encode scheme kind, image size, video-recording start time, video-recording end time, each-scene start and end time, presence or absence of a moving object in a scene, link relationship between video data files and so on, according to a data description language analogous to XML (eXtensible Markup Language) or multimedia content describing specification such as MPEG7 (Paragraph 43). Therefore taking the combined teachings of Oe, Hsu and Taniguchi, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have an imaging controller configured to control start and end of recording of the image data imaged by the imaging element based on a result analyzed by the analyzing part in order to have the image data be recorded for the amount of time based on the image. Claim(s) 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251) in view of Masuda (US PGPUB 20210149048). [Claims 14 and 15] Oe fails to teach a distance measurement part configured to measure a distance to at least a part of the subject and wherein the distance measurement part includes a light emitting part configured to irradiate the subject with light, a light receiving part configured to receive light emitted from the light emitting part and reflected by at least a part of the subject; and a measurement part configured to measure a distance to at least a part of the subject according to a time until the light emitted from the light emitting part is reflected by at least the part of the subject and is received by the light receiving part. However Masuda teaches in FIG. 2A is a diagram describing operations of the distance measuring unit (TOF camera) 10. The light emitting unit 11 emits pulsed irradiation light 31 such as a laser from a light source such as a laser diode (LD) toward a subject 2. 
The light receiving unit 12 detects pulse-shaped reflected light 32 that is returned light of the irradiation light 31 reflected by the subject 2 (Paragraph 31) and FIG. 2B is a diagram describing an example of a calculation method at a distance measurement time. In the distance measurement, the distance L to the subject 2 can be obtained by L=Td×c/2 on the basis of a time difference Td between the irradiation light 31 and the reflected light 32 (where c is the speed of light) (Paragraph 32). Therefore taking the combined teachings of Oe and Masuda, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have a distance measurement part configured to measure a distance to at least a part of the subject and wherein the distance measurement part includes a light emitting part configured to irradiate the subject with light, a light receiving part configured to receive light emitted from the light emitting part and reflected by at least a part of the subject; and a measurement part configured to measure a distance to at least a part of the subject according to a time until the light emitted from the light emitting part is reflected by at least the part of the subject and is received by the light receiving part in order to reduce a peak value of current consumption without adding a new component. Claim(s) 16 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251) in view of Masuda (US PGPUB 20210149048) and in further view of Aagaard et al. (US PGPUB 20030210329). [Claim 16] Oe in view of Masuda fails to teach a virtual image generating part configured to generate image data when the subject is imaged from a position different from the position of the imaging device based on image data of the subject generated by the imaging of the imaging element and the distance to at least the part of the subject measured by the distance measurement part. However Aagaard teaches in FIG. 4, that camera 12 is the currently selected master camera and is initially focused on a target object, such as a football player at location 108. By using known locations of the slave cameras 14-70, the distance D1 from camera 12 to location 108 and the framing size (or zoom) information for camera 12, the master broadcaster computer can calculate the information needed to direct all of the additional (slave) cameras 14-70 to focus on the target object 108 and to adjust the frame size so that the target object appears to be substantially the same size in the video images produced by each of the cameras 12, 14-70. This calculated information is then sent to the camera control panels associated with each of the additional cameras, and the camera control panels move the cameras to the correct position (Paragraph 47). Therefore taking the combined teachings of Oe, Masuda and Aagaard, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have teach a virtual image generating part configured to generate image data when the subject is imaged from a position different from the position of the imaging device based on image data of the subject generated by the imaging of the imaging element and the distance to at least the part of the subject measured by the distance measurement part in order to move the cameras to the correct position thereby generating an accurate image. Claim(s) 17 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. 
(US Patent # 9,910,251) in view of Aagaard et al. (US PGPUB 20030210329). [Claim 17] Oe fails to teach wherein at least one image device of the plurality of the imaging devices images the subject using information from the imaging device different from the at least one imaging device. However Aagaard teaches in FIG. 4, that camera 12 is the currently selected master camera and is initially focused on a target object, such as a football player at location 108. By using known locations of the slave cameras 14-70, the distance D1 from camera 12 to location 108 and the framing size (or zoom) information for camera 12, the master broadcaster computer can calculate the information needed to direct all of the additional (slave) cameras 14-70 to focus on the target object 108 and to adjust the frame size so that the target object appears to be substantially the same size in the video images produced by each of the cameras 12, 14-70. This calculated information is then sent to the camera control panels associated with each of the additional cameras, and the camera control panels move the cameras to the correct position (Paragraph 47). Therefore taking the combined teachings of Oe and Aagaard, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have at least one image device of the plurality of the imaging devices images the subject using information from the imaging device different from the at least one imaging device in order to move the cameras to the correct position thereby generating an accurate image. Claim(s) 18, 20, 22, 24, 26 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251) in view of Matsuzawa et al. (US PGPUB 20130265467). [Claim 18] Oe fails to teach wherein the imaging device is configured to change a center position of a region formed as the final image in the intermediate image. However Matsuzawa teaches that when a user operates the operation unit 23a of the lens 20 so as to zoom in while a cutout image is obtained by the link processing unit 11e, the signal-processing unit 11a creates an image which gradually zooms in to the zoomed area A2 by electronic zoom, as shown in FIG. 11B, FIG. 11C, and FIG. 11D, as in the first embodiment, and displays the image as a cutout image on the display unit 18a. The process of this zoom-in is also obtained as a cutout image, and continuously recorded on the recording unit 17. When the zoomed area A2 is reached, the signal-processing unit 11a then stops zoom-in. As shown in FIG. 11D, a specification release key 112 is then indicated (Paragraph 71). Therefore taking the combined teachings of Oe and Matsuzawa, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have the imaging device is configured to change a center position of a region formed as the final image in the intermediate image in order to obtain the whole image that properly continues capturing the whole view as shown in fig. 11a while the cutout image zooms in with the interested object positioned in the center. [Claim 20] Oe teaches an imaging device comprising: a first optical system (fig. 1A, primary lens group 11) configured to form an intermediate image of a subject (col. 
4 lines 13-16, The lens system 10 is composed, in order from the input side (object side, incident side) 90a, of a first lens group (primary lens group) 11 that forms light from the input side 90a into an image as an intermediate image 51); a second optical system (fig. 1a, secondary lens group 12) that is an optical system configured to form a final image by re-imaging at least a part of the intermediate image and that is configured to change a magnification of the final image (col. 4 lines 10-19, The lens system 10 is composed, in order from the input side (object side, incident side) 90a, of a first lens group (primary lens group) 11 that forms light from the input side 90a into an image as an intermediate image 51 and a second lens group (secondary lens group, relay lens group) 12 that forms light from the intermediate image 51 into an image as the final image 52. Col. 6 lines 1-15, With this lens system 10, when zooming from the wide-angle end to the telephoto end, while all of the lenses L101 to L110 of the first lens group 11 and the lenses L201 to L203 and the lenses L209 to L214 of the second lens group 12 do not move, zooming is carried out by monotonously moving the optical systems Z1 to Z3 along the optical axis 100 from the output side 90b toward the input side 90a. Accordingly, there is no fluctuation in the image formation position of the intermediate image 51 due to zooming, and at each zoom position from the wide-angle end to the telephoto end, it is possible to prevent the plane of the intermediate image from becoming positioned at a lens surface or inside a lens. This means that it is possible to suppress scratches or foreign matter such as dust on lens surfaces from appearing in the final image 52) {Therefore an imaging magnification of the second optical system 12 (a magnification of the final image 52 with respect to the intermediate image 51, i.e., a magnification of the final image 52) is changed}; and an imaging element configured to image the final image (col. 4 lines 25-28, One example of the imaging device 50 is an image sensor (imaging element) such as a CCD or a CMOS that converts the final image 52 into an electrical signal (image data)). Oe fails to teach wherein the imaging device is configured to change a center position of a region formed as the final image in the intermediate image. However Matsuzawa teaches when a user operates the operation unit 23a of the lens 20 so as to zoom in while a cutout image is obtained by the link processing unit 11e, the signal-processing unit 11a creates an image which gradually zooms in to the zoomed area A2 by electronic zoom, as shown in FIG. 11B, FIG. 11C, and FIG. 11D, as in the first embodiment, and displays the image as a cutout image on the display unit 18a. The process of this zoom-in is also obtained as a cutout image, and continuously recorded on the recording unit 17. When the zoomed area A2 is reached, the signal-processing unit 11a then stops zoom-in. As shown in FIG. 11D, a specification release key 112 is then indicated (Paragraph 71). Therefore taking the combined teachings of Oe and Matsuzawa, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have the imaging device is configured to change a center position of a region formed as the final image in the intermediate image in order to obtain the whole image that properly continues capturing the whole view as shown in fig. 
11a while the cutout image zooms in with the interested object positioned in the center. [Claim 22] Oe teaches wherein the first optical system is telecentric at a side of the intermediate image (col. 7 lines 21-28, In this lens system 10, the positive lens L109, both of whose surfaces S15 and S16 are aspherical, is disposed on the input side 90a of the first optical system F. This means that it is possible, using the positive lens L109, to output light flux that has been gathered from a wide angle by the negative lens L101 disposed closest to the input side 90a to the output side 90b in a state that is extremely close to telecentric.). [Claim 24] Oe teaches wherein the second optical system is telecentric at a side of the first optical system (col. 1 lines 60-65, By fixing the second subsystem with positive refractive power on the output side of the first subsystem, it is possible to output light flux dispersed on the output side of the intermediate image to the output side in a state that is extremely close to telecentric. This means that it is possible to move the lens of the third subsystem relative to light flux that is incident in a state that is close to telecentric). [Claim 26] Oe teaches wherein a difference between an angle of a main beam of light from the first optical system with respect to the optical axis of the first optical system and an angle of a main beam of light entering the second optical system with respect to an optical axis of the second optical system is 1° or less (fig. 1 clearly shows that the main beams of light from first optical system 60 to the second optical system 66 are almost parallel which means that the difference is almost zero degree or less than 1 degree. By fixing the second subsystem with positive refractive power on the output side of the first subsystem, it is possible to output light flux dispersed on the output side of the intermediate image to the output side in a state that is extremely close to telecentric. This means that it is possible to move the lens of the third subsystem relative to light flux that is incident in a state that is close to telecentric, col. 1 lines 60-65). [Claim 27] Oe teaches wherein a maximum viewing angle of the first optical system is 170° or more (col. 5 lines 35-41, With this lens system 10, light flux that is taken in from a wide area (a wide angle) via the negative lens L101, which is disposed closest to the input side 90a and has the largest effective diameter, is guided by the first lens group 11 across the optical axis 100 to the opposite side (the second region 100b) to form an intermediate image (primary image formation) which is the inverted image 51. Wide angle lens systems are known to have a range of angles more than 170 degree or higher). Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Matsuzawa et al. (US PGPUB 20130265467) and in further view of Hsu (US PGPUB 20130250155). [Claim 21] Oe in view of Matsuzawa fails to teach a driving part configured to move the second optical system and the imaging element in a direction intersecting an optical axis of the first optical system. 
However Shinzato teaches that in a case where a camera shake correction lens is built in (included) the lens barrel 740 or the optical system of the imaging apparatus, the vibration wave actuator 10 according to the above-described exemplary embodiment is applicable as a driving unit for a camera shake correction unit for moving the camera shake correction lens in a directions orthogonal to the optical axis of the optical system. In this case, to allow a lens holding member to move in two directions perpendicularly intersecting with each other in a plane perpendicularly intersecting with the optical axis direction, one or a plurality of vibration wave actuator units 10 for driving the lens holding member for each direction is disposed. Instead of driving the camera shake correction lens, the camera shake correction unit may move the image sensor 710 (built in the imaging apparatus main body) in directions orthogonal to the optical axis of the optical system. (Paragraph 80). Therefore taking the combined teachings of Oe, Matsuzawa and Shinzato, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have a driving part configured to move the second optical system and the imaging element in a direction intersecting an optical axis of the first optical system in order to drive the lens for zooming and shake correction purposes during zooming thereby getting less blurred images. Claim(s) 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Matsuzawa et al. (US PGPUB 20130265467) and in further view of Hsu (US PGPUB 20130250155). [Claim 28] Oe in view of Matsuzawa fails to teach an analyzing part configured to analyze an image data of the subject generated by the imaging of the imaging element. However Hsu teaches The image capture assembly 200 includes an image processor 202 which performs various image processing functions described below. The image processor 202 is typically a programmable image processor but could be, for example, a hard-wired custom integrated circuit (IC) processor, a general purpose microprocessor, or a combination of hard-wired custom IC and programmable processors. When the image capture assembly 200 is part of a multipurpose portable electronic device such as a mobile phone, smartphone or superphone, at least some of the functions of the image capture assembly 200 may be performed by the main processor 102 of the host electronic device 100. It is contemplated that all of the functions performed by the image processor 202 could be performed by the main processor 102, in which case the image processor 202 can be omitted (Paragraph 29). Therefore taking the combined teachings of Oe, Matsuzawa and Hsu, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have analyze an image data of the subject generated by the imaging of the imaging element in order to generate an image according to the user’s needs by programming the processor. [Claim 29] Oe in view of Matsuzawa fails to teach a visual field controller configured to execute at least one of driving of the driving part and changing of the magnification of the final image based on a result analyzed by the analyzing part. 
However Hsu teaches The image processor 202 then analyses the digital captured image signal C using autofocus calculations (e.g., contrast maximization) (308B) and produces focus signals based on the analysis (e.g., the result of the autofocus calculations) which drive the focus adjuster 206 to move the zoom lens 204 to adjust the focus of the image (308C) (Paragraph 47). Therefore taking the combined teachings of Oe, Matsuzawa and Hsu, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have driven the driving part and changing of the magnification of the final image based on a result analyzed by the analyzing part in order to generate an optimally focused image is displayed on the display 112 of the electronic device as a post-capture preview image. Claim(s) 30 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Matsuzawa et al. (US PGPUB 20130265467) and in further view of Taniguchi (US PGPUB 20030093810). [Claim 30] Oe in view of Matsuzawa fails to teach an imaging controller configured to control start and end of recording of the image data imaged by the imaging element based on a result analyzed by the analyzing part. However Taniguchi teaches "Metadata" refers to data describing information in various kinds accompanied by video data or a data analysis result, which is data separately independent of "video data" This embodiment describes, as metadata, video-data file name (identifiers), encode scheme kind, image size, video-recording start time, video-recording end time, each-scene start and end time, presence or absence of a moving object in a scene, link relationship between video data files and so on, according to a data description language analogous to XML (eXtensible Markup Language) or multimedia content describing specification such as MPEG7 (Paragraph 43). Therefore taking the combined teachings of Oe, Matsuzawa and Taniguchi, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have an imaging controller configured to control start and end of recording of the image data imaged by the imaging element based on a result analyzed by the analyzing part in order to have the image data be recorded for the amount of time based on the image. Claim(s) 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Matsuzawa et al. (US PGPUB 20130265467) and in further view of Masuda (US PGPUB 20210149048). [Claims 31 and 32] Oe in view of Matsuzawa fails to teach a distance measurement part configured to measure a distance to at least a part of the subject and wherein the distance measurement part includes a light emitting part configured to irradiate the subject with light, a light receiving part configured to receive light emitted from the light emitting part and reflected by at least a part of the subject; and a measurement part configured to measure a distance to at least a part of the subject according to a time until the light emitted from the light emitting part is reflected by at least the part of the subject and is received by the light receiving part. However Masuda teaches in FIG. 2A is a diagram describing operations of the distance measuring unit (TOF camera) 10. The light emitting unit 11 emits pulsed irradiation light 31 such as a laser from a light source such as a laser diode (LD) toward a subject 2. 
The light receiving unit 12 detects pulse-shaped reflected light 32 that is returned light of the irradiation light 31 reflected by the subject 2 (Paragraph 31) and FIG. 2B is a diagram describing an example of a calculation method at a distance measurement time. In the distance measurement, the distance L to the subject 2 can be obtained by L=Td×c/2 on the basis of a time difference Td between the irradiation light 31 and the reflected light 32 (where c is the speed of light) (Paragraph 32). Therefore taking the combined teachings of Oe, Matsuzawa and Masuda, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have a distance measurement part configured to measure a distance to at least a part of the subject and wherein the distance measurement part includes a light emitting part configured to irradiate the subject with light, a light receiving part configured to receive light emitted from the light emitting part and reflected by at least a part of the subject; and a measurement part configured to measure a distance to at least a part of the subject according to a time until the light emitted from the light emitting part is reflected by at least the part of the subject and is received by the light receiving part in order to reduce a peak value of current consumption without adding a new component. Claim(s) 33 is rejected under 35 U.S.C. 103 as being unpatentable over Oe et al. (US Patent # 9,910,251), Matsuzawa et al. (US PGPUB 20130265467) and in further view of Aagaard et al. (US PGPUB 20030210329). [Claim 33] Oe in view of Matsuzawa fails to teach wherein at least one image device of the plurality of the imaging devices images the subject using information from the imaging device different from the at least one imaging device. However Aagaard teaches in FIG. 4, that camera 12 is the currently selected master camera and is initially focused on a target object, such as a football player at location 108. By using known locations of the slave cameras 14-70, the distance D1 from camera 12 to location 108 and the framing size (or zoom) information for camera 12, the master broadcaster computer can calculate the information needed to direct all of the additional (slave) cameras 14-70 to focus on the target object 108 and to adjust the frame size so that the target object appears to be substantially the same size in the video images produced by each of the cameras 12, 14-70. This calculated information is then sent to the camera control panels associated with each of the additional cameras, and the camera control panels move the cameras to the correct position (Paragraph 47). Therefore taking the combined teachings of Oe, Matsuzawa and Aagaard, it would be obvious to one skilled in the art before the effective filing date of the invention to have been motivated to have at least one image device of the plurality of the imaging devices images the subject using information from the imaging device different from the at least one imaging device in order to move the cameras to the correct position thereby generating an accurate image. Allowable Subject Matter Claims 3, 5, 23 and 25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
The prior art fails to teach as recited in claims 3 and 23, “wherein a difference between an angle of a main beam of light, which advances from the first optical system toward a first place on an intermediate image forming region, with respect to the optical axis of the first optical system and an angle of a main beam of light, which advances from the first optical system toward a second place on the intermediate image forming region at which a distance from the optical axis of the first optical system is different from the first place, with respect to the optical axis of the first optical system is 1° or less” and as recited in claims 5 and 25, “wherein a difference between an angle of a main beam of light, which advances from a first place on an intermediate image forming region toward the second optical system, with respect to an optical axis of the second optical system and an angle of a main beam of light, which advances from a second place on the intermediate image forming region, at which a distance from the optical axis of the second optical system is different from the first place, toward the second optical system, with respect to the optical axis of the second optical system is 1° or less”. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESH K AGGARWAL whose telephone number is (571)272-7360. The examiner can normally be reached Monday - Friday 9:30-6. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at 5712727564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YOGESH K AGGARWAL/Primary Examiner, Art Unit 2637
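The §103 rejections of claims 14-15 and 31-32 rest on Masuda's time-of-flight relation L = Td × c / 2, where Td is the round-trip delay of the emitted light. A minimal sketch of that relation only; the delay value below is hypothetical, not from any cited reference:

```python
# Illustrative check of the time-of-flight relation cited from Masuda
# (L = Td * c / 2): the round-trip delay is halved because the light travels to
# the subject and back. The 20 ns delay below is hypothetical, not from the OA.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(td_seconds: float) -> float:
    """Distance to the subject for a measured round-trip delay."""
    return td_seconds * C / 2

print(f"{tof_distance(20e-9):.2f} m")  # 20 ns round trip -> ~3.00 m
```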

Prosecution Timeline

Dec 15, 2023: Application Filed
May 20, 2024: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection — §102, §103, §112 (current)
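If the 2y 7m median time to grant is measured from the filing date above (an assumption; the report does not state which event starts the clock), the implied grant window works out to mid-2026. A minimal sketch:

```python
# Sketch of the implied grant window, assuming the "2y 7m" median pendency is
# measured from the filing date (the report does not state its anchor event).

from datetime import date

FILED = date(2023, 12, 15)
MONTHS_TO_GRANT = 2 * 12 + 7  # "2y 7m" median time to grant

# Simple month arithmetic; the day of month is kept, which is fine for a rough projection.
months = FILED.year * 12 + (FILED.month - 1) + MONTHS_TO_GRANT
projected_grant = date(months // 12, months % 12 + 1, FILED.day)
print(projected_grant)  # 2026-07-15
```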

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12604079: INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12604100: IMAGE PROCESSING METHOD AND ELECTRONIC DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598265: COOPERATIVE PHOTOGRAPHING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587735: IMAGING APPARATUS, METHOD FOR CONTROLLING THE SAME, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579842: METHOD FOR ADAPTING THE QUALITY AND/OR FRAME RATE OF A LIVE VIDEO STREAM BASED UPON POSE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 96% (+6.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1113 resolved cases by this examiner. Grant probability derived from career allow rate.
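The 96% "With Interview" figure appears to be the 90% base probability plus the +6.8 point interview lift. A minimal sketch of that arithmetic, under the assumption that the lift is simply added to the base rate and capped at 100%:

```python
# Sketch of the interview-adjusted probability shown above, assuming the report
# adds the interview lift (in percentage points) to the base probability and
# caps the result at 100%. This additive model is an assumption, not documented.

base_grant_probability = 0.897  # career allow rate (998 / 1113)
interview_lift = 0.068          # +6.8 percentage points

with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"{with_interview:.1%}")  # 96.5%, displayed as 96%
```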
