DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the Application filed on 09/20/2024.
The Application claims a foreign priority (FP) date of 09/28/2023.
Claims 1, 17-18 and 32-35 are independent.
Claims 1-35 are pending.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in the instant Application.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 06/04/2025 and 09/20/2024 are in compliance with the provisions of 37 CFR 1.97 and 37 CFR 1.98(a)(4). Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “control unit” and “determination unit” in claims 1-12, 15-16, 18-26, 28-31 and 33-36.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 10-12, 15-17, 33 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Omori et al. (U.S. Patent Publication Number 2021/0258504 A1) in view of Wakamatsu (U.S. Patent Publication Number 2021/0152731 A1).
Regarding Claim 1, Omori discloses a capture control apparatus (Title – control apparatus and method; and in the Abstract Omori discloses a control apparatus that includes a receiving unit that receives a captured image obtained by capturing an image of an image capture device), comprising:
one or more processors that execute a program stored in a memory (Fig 3A – control unit 302; CPU 303; RAM 301, ROM 305; In ¶0028, Omori discloses that the CPU 205 controls devices based on control program stored in ROM 206 and the external storage device 208) and thereby function as:
a control unit (Fig 1-6 – control apparatus 102) configured to control capture directions and angles of view of two or more sub cameras (Auxiliary cameras 104), among a plurality of cameras including a main camera (Image capturing device 103 is the main camera) and the sub cameras (In ¶0025, Omori discloses that the system controls a direction in which the image capture device 103 and auxiliary camera 104 capture images (pan and tilt operation) and drives the zoom lens of the image capture device 103), based on roles set on the sub cameras and on a subject of interest and an angle of view of the main camera (In ¶0044, Omori discloses that a user can remotely designate a portion of a subject present in the images captured by the auxiliary camera); and
Omori discloses using a control unit and a CPU that control the processing. However, Omori fails to clearly disclose a determination unit configured to determine whether the main camera satisfies a predetermined condition,
wherein in a case where it has been determined that the main camera satisfies the condition, the control unit selects at least one of the sub cameras, and changes content of control for the selected sub camera so as to track and capture the subject of interest of the main camera under a setting different from a setting of the main camera.
Instead in a similar endeavor, Wakamatsu discloses a determination unit (Fig 2 – image processing unit 207, control unit 223 – CPU 355) configured to determine whether the main camera satisfies a predetermined condition (In ¶0098-¶0102, Wakamatsu teaches that the angle of each of the image capturing apparatuses may be calculated based on results from different sensors and this is the “predetermined angle position” – Examiner would like to state that the “predetermined condition” has not been defined in the claim),
wherein in a case where it has been determined that the main camera (In ¶0193, Wakamatsu teaches that the control unit sets the image capturing apparatus 101 that is closest to the subject as a “main” image capturing apparatus) satisfies the condition, the control unit selects at least one of the sub cameras (In ¶0193, Wakamatsu also teaches that the control unit sets the other image capturing apparatuses as sub image capturing apparatuses), and changes content of control for the selected sub camera so as to track and capture the subject of interest of the main camera under a setting different from a setting of the main camera (Wakamatsu teaches this in the flow charts of Figs 21 and 22 and in the corresponding disclosure, ¶0194 - ¶0211, where he teaches the processing of the image capturing apparatuses; in step S2108, the automatic image capturing determination processing is performed. In steps S2203 and S2204, the angle of view of the sub image capturing apparatus is determined. Specifically, the position coordinates of the subject are calculated for the image capturing apparatus 101 that is set as the sub image capturing apparatus, and the angle of view is set such that the coordinates of the subject fall within the angle of view of the image capturing apparatus, so as to include the important subject in the determination. Further, in ¶0207, Wakamatsu teaches that a setting is performed in step S2308 to perform framing such that the same subject is detected by each image capturing apparatus, and image capturing apparatuses 1 and 2 each perform framing such that the same subject is captured within the image).
Omori and Wakamatsu are combinable because both are related to plurality of image capturing apparatuses for automatic subject detection.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the automatic image capturing determination processing as taught by Wakamatsu in the imaging module disclosed by Omori.
The suggestion/motivation for doing so would have been to “control plurality of image capturing apparatuses so that they work in conjunction with each other” as disclosed by Wakamatsu in ¶0009.
Therefore, it would have been obvious to combine Omori and Wakamatsu to obtain the invention as specified in claim 1.
Regarding Claim 2, Omori in view of Wakamatsu discloses wherein in a case where it has been determined that the main camera (Wakamatsu: In ¶0193, Wakamatsu teaches that the control unit sets the image capturing apparatus 101 that is closest to the subject as a “main” image capturing apparatus) satisfies the condition, the control unit controls the angle of view of the selected sub camera (Wakamatsu: In ¶0193, Wakamatsu also teaches that the control unit sets the other image capturing apparatuses as sub image capturing apparatuses) in phase with the angle of view of the main camera (Wakamatsu: Wakamatsu teaches this in ¶0207, where he teaches that a setting is performed on image capturing apparatus 2 (the sub image capturing apparatus) such that the same subject as that detected by image capturing apparatus 1 is detected by image capturing apparatus 2).
Regarding Claim 3, Omori in view of Wakamatsu discloses wherein the determination unit makes the determination with respect to a plurality of conditions, and the plurality of conditions include one or more of:
one or more conditions related to luminance of a video captured by the main camera;
one or more conditions related to one or more of luminance, a position (Wakamatsu: In ¶0206, Wakamatsu teaches that the control unit calculates the position coordinates of the subject on the layout coordinates based on the distance from the image capturing apparatus 1 to the designated subject), a moving speed, a size, and a depth of field of a subject region in the video captured by the main camera;
one or more conditions related to a movement of the main camera (Wakamatsu: See ¶0087); and
one or more conditions related to white balance of the main camera.
Regarding Claim 4, Omori in view of Wakamatsu discloses wherein the control unit causes a type of a setting that is different between the selected sub camera and the main camera to vary in accordance with a condition that is satisfied by the main camera among the plurality of conditions (Wakamatsu: This is taught by Wakamatsu in ¶0218, where he teaches automatic capturing processing using the plurality of image capturing apparatuses by varying the angle of view adjustment, with an image of a subject framed under different conditions such as the size of the subject).
Regarding Claim 10, Omori in view of Wakamatsu discloses wherein the control unit selects a sub camera that has been set in advance from among the sub cameras (Wakamatsu: This is taught in step S2203 in the flow chart of Fig 22).
Regarding Claim 11, Omori in view of Wakamatsu discloses wherein the control unit selects a sub camera other than a sub camera that is currently capturing a video selected by an external apparatus from among the sub cameras (Wakamatsu: In ¶0203, Wakamatsu teaches that all installed image capturing apparatuses may be selected as the image capturing apparatus to simultaneously perform image capturing. Wakamatsu specifically teaches that an image capturing apparatus that has not been used may be preferentially selected as the image capturing apparatus to perform image capturing).
Regarding Claim 12, Omori in view of Wakamatsu discloses wherein the control unit selects a sub camera capable of performing image capture with a composition similar to a composition of the main camera from among the sub cameras (Wakamatsu: This is taught in steps S2202 and S2203. In step S2202, it is determined whether the subject is included in the image and, if the result is positive, the sub image capturing apparatus is selected – which implies that the sub camera performs image capturing of a similar scene or composition).
Regarding Claim 15, Omori in view of Wakamatsu discloses wherein in a case where the sub cameras do not include a sub camera capable of performing image capture with a composition similar to a composition of the main camera, the control unit selects a sub camera with a lowest predetermined priority order from among the sub cameras (Wakamatsu: This is taught in ¶0137 - ¶0145 and in flow chart of Fig 18 – step S1806 and S1807).
Regarding Claim 16, Omori in view of Wakamatsu discloses wherein in a case where the control unit has selected a plurality of sub cameras, the control unit causes content of control to vary with each of the selected sub cameras (Wakamatsu: In ¶0203 Wakamatsu teaches that in step S2203 all image capturing apparatuses having distance of the subject less than or equal to a predetermined value may be selected as the image capturing apparatus to simultaneously perform image capturing. See Fig 23 and corresponding disclosure).
Regarding Claim 17, this claim is a method claim that has limitations parallel to the apparatus claim of Claim 1. Claim 17 is rejected on the same grounds as Claim 1.
Regarding Claim 33, this claim is a system claim that has limitations parallel to the apparatus claim of Claim 1. Claim 33 is rejected on the same grounds as Claim 1.
Regarding Claim 35, this claim is a program claim that has limitations parallel to the apparatus claim of Claim 1. Claim 35 is rejected on the same grounds as Claim 1.
Claims 18-19, 23-26, 31-32, 34 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Omori et al. (U.S. Patent Publication Number 2021/0258504 A1) in view of Wakamatsu (U.S. Patent Publication Number 2021/0152731 A1) and further in view of Nemeth et al. (U.S. Patent Publication Number 2021/0314497 A1).
Regarding Claim 18, Omori discloses a capture control apparatus (Title – control apparatus and method; and in the Abstract Omori discloses a control apparatus that includes a receiving unit that receives a captured image obtained by capturing an image of an image capture device), comprising:
one or more processors that execute a program stored in a memory (Fig 3A – control unit 302; CPU 303; RAM 301, ROM 305; In ¶0028, Omori discloses that the CPU 205 controls devices based on control program stored in ROM 206 and the external storage device 208) and thereby function as:
a control unit (Fig 1-6 – control apparatus 102) configured to control capture directions and angles of view of two or more sub cameras (Auxiliary cameras 104), among a plurality of cameras including a main camera (Image capturing device 103 is the main camera) and the sub cameras (In ¶0025, Omori discloses that the system controls a direction in which the image capture device 103 and auxiliary camera 104 capture images (pan and tilt operation) and drives the zoom lens of the image capture device 103), based on roles set on the sub cameras and on a subject of interest and an angle of view of the main camera (In ¶0044, Omori discloses that a user can remotely designate a portion of a subject present in the images captured by the auxiliary camera); and
Omori discloses using a control unit and a CPU that control the processing, but fails to clearly disclose a determination unit configured to determine whether a camera in a first state that is not performing an operation associated with the role set thereon exists among the plurality of cameras.
Instead in a similar endeavor, Nemeth discloses a determination unit (Fig 1 – image processing units 111, 112) configured to determine whether a camera in a first state that is not performing an operation associated with the role set thereon exists among the plurality of cameras (In ¶0026, Nemeth teaches that the method relates to determining an error of a camera monitoring system, and in ¶0018 he teaches a warning that can be output from the defective image processing unit and relayed to the intact image processing unit).
Omori and Nemeth are combinable because both are related to plurality of image capturing apparatuses for automatic subject detection.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine defective image processing unit as taught by Nemeth in the imaging module disclosed by Omori.
The suggestion/motivation for doing so would have been to “determine an error in a camera monitoring system and in particular to an error resistance architecture for a camera system” as disclosed by Nemeth in ¶0001.
However, Omori in view of Nemeth fails to clearly disclose a determination unit, wherein in a case where it has been determined that the main camera satisfies the condition, the control unit selects at least one of the sub cameras, and changes content of control for the selected sub camera so as to track and capture the subject of interest of the main camera under a setting different from a setting of the main camera.
Instead in a similar endeavor, Wakamatsu discloses a determination unit (Fig 2 – image processing unit 207, control unit 223 – CPU 355; In ¶0098-¶0102, Wakamatsu teaches that the angle of each of the image capturing apparatuses may be calculated based on results from different sensors and this is the “predetermined angle position” – Examiner would like to state that the “predetermined condition” has not been defined in the claim),
wherein in a case where it has been determined that the main camera (In ¶0193, Wakamatsu teaches that the control unit sets the image capturing apparatus 101 that is closest to the subject as a “main” image capturing apparatus) satisfies the condition, the control unit selects at least one of the sub cameras (In ¶0193, Wakamatsu also teaches that the control unit sets the other image capturing apparatuses as sub image capturing apparatuses), and changes content of control for the selected sub camera so as to track and capture the subject of interest of the main camera under a setting different from a setting of the main camera (Wakamatsu teaches this in the flow charts of Figs 21 and 22 and in the corresponding disclosure, ¶0194 - ¶0211, where he teaches the processing of the image capturing apparatuses; in step S2108, the automatic image capturing determination processing is performed. In steps S2203 and S2204, the angle of view of the sub image capturing apparatus is determined. Specifically, the position coordinates of the subject are calculated for the image capturing apparatus 101 that is set as the sub image capturing apparatus, and the angle of view is set such that the coordinates of the subject fall within the angle of view of the image capturing apparatus, so as to include the important subject in the determination. Further, in ¶0207, Wakamatsu teaches that a setting is performed in step S2308 to perform framing such that the same subject is detected by each image capturing apparatus, and image capturing apparatuses 1 and 2 each perform framing such that the same subject is captured within the image).
Omori, Nemeth and Wakamatsu are combinable because all are related to plurality of image capturing apparatuses for automatic subject detection.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the automatic image capturing determination processing as taught by Wakamatsu in the imaging module disclosed by Omori in view of Nemeth.
The suggestion/motivation for doing so would have been to “control plurality of image capturing apparatuses so that they work in conjunction with each other” as disclosed by Wakamatsu in ¶0009.
Therefore, it would have been obvious to combine Omori, Nemeth and Wakamatsu to obtain the invention as specified in claim 18.
Regarding Claim 19, Omori in view of Nemeth and Wakamatsu discloses wherein in a case where contents of control on the angle of view associated with the roles set on the camera in the first state and the selected camera do not match, the control unit changes the content of control for the selected camera to content of control that conforms with the role set on the camera in the first state (Nemeth: Nemeth’s teachings provide redundant image acquisition so that a failure of one component does not result in a total failure. For this purpose, the architecture comprises two cameras or camera units).
Regarding Claim 23, Omori in view of Nemeth and Wakamatsu discloses wherein the determination unit determines a camera that autonomously performs image capture different from a role set thereon as the camera in the first state (Nemeth: Nemeth teaches this in ¶0066 where he teaches the use of four different cameras that have different field of view and even if a failure occurs in these components, it can be ensured that image data can be displayed).
Regarding Claim 24, Omori in view of Nemeth and Wakamatsu discloses wherein the determination unit determines a camera in which an abnormality has been detected as the camera in the first state (Nemeth: Nemeth teaches this in ¶0086 where he teaches an algorithm in order to obtain a deviation of the image acquisition system).
Regarding Claim 25, Omori in view of Nemeth and Wakamatsu discloses wherein the determination unit determines a camera in which an operation performed by a unit that is not under management by the control unit has been detected as the camera in the first state (Nemeth: Nemeth teaches the use of two image processing units 111 and 112, where the third camera unit 123 is controlled by the first image processing unit 111 and the fourth camera 124 is controlled by the second image processing unit 112).
Regarding Claim 26, Omori in view of Nemeth and Wakamatsu discloses wherein the control unit selects, from among cameras that are included among the plurality of cameras and are other than the camera in the first state, a camera which is similar to the camera in the first state in position and capability, and which has a predetermined priority order lower than a predetermined priority order of the camera in the first state (Wakamatsu: Wakamatsu teaches this in the flow chart of Fig 17 and corresponding disclosure).
Regarding Claim 31, Omori in view of Nemeth and Wakamatsu discloses wherein in a case where it has been determined that the camera in the first state exists, the control unit notifies a user of the existence (Omori: Omori discloses this in ¶0038; Wakamatsu: Wakamatsu discloses numerous instances in which the camera procedures notify the user).
Regarding Claim 32, this claim is a method claim that has limitations parallel to the apparatus claim of Claim 18. Claim 32 is rejected on the same grounds as Claim 18.
Regarding Claim 34, this claim is a system claim that has limitations parallel to the apparatus claim of Claim 18. Claim 34 is rejected on the same grounds as Claim 18.
Regarding Claim 36, this claim is a program claim that has limitations parallel to the apparatus claim of Claim 18. Claim 36 is rejected on the same grounds as Claim 18.
Allowable Subject Matter
Claims 5-9, 13-14, 20-22 and 27-30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
References Cited
The following prior art made of record but not relied upon is considered pertinent to applicant's disclosure.
Shimizu et al. (U.S. Patent Publication Number 2017/0223261 A1) discloses an image pickup device that recognizes the object that the user is attempting to capture as the subject, tracks the movement of that subject, and can continue tracking the movement of the subject even when the subject leaves the capturing area so that the subject can always be reliably brought into focus. The image pickup device includes a main camera that captures the subject; an EVF that displays the captured image captured by the main camera, a sub-camera that captures the subject using a wider capturing region than the main camera, and a processing unit that extracts the subject from the captured images captured by the main camera and the sub-camera, tracks the extracted subject, and brings the subject into focus when an image of the subject is actually captured. When the subject moves outside of a capturing region of the main camera, the processing unit tracks the subject extracted from the captured image captured by the sub-camera.
Ohya et al. (U.S. Patent Publication Number 2022/0067917 A1) discloses a first imaging device and a second imaging device that are configured to directly receive a signal from a trigger generation circuit. A processing device processes first image data captured by the first imaging device in response to a first trigger signal, and second image data captured by the second imaging device in response to a second trigger signal. The first trigger signal and the second trigger signal are signals generated to start capturing images at the same time point. The processing device performs recognition processing of a target included in the first image data and the second image data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PADMA HALIYUR whose telephone number is (571)272-3287. The examiner can normally be reached Monday-Friday 7AM - 4PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PADMA HALIYUR/Primary Examiner, Art Unit 2639 March 11, 2026