DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 5, 2026 has been entered. Claims 1, 3-14 and 16-29 remain pending in the application.
Information Disclosure Statement
The information disclosure statement filed 03/10/2023 fails to comply with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 because the reference US-64490010-B1 has a typographical error. The correct reference US-6449010-B1 has been found, considered, and cited on the PTO-892. Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Claim Objections
Claims 1, 3, 14, 20-21 and 29 are objected to because of the following informalities:
In claim 1 line 8, “the target” should read “a target”
In claim 1 line 11, “the processor is further configured” should read “a processor is configured”
In claim 3 lines 5-6, “of each of the multiple cameras” should read “of the multiple cameras”
In claim 14 line 8, “the first secondary camera” should read “a first secondary camera”
In claim 14 line 11, “the second secondary camera” should read “a second secondary camera”
In claim 20 line 9, “the first secondary camera” should read “a first secondary camera”
In claim 20 line 12, “the second secondary camera” should read “a second secondary camera”
In claim 21 line 3, “deactive” should read “deactivate”
Claim 29 should be dependent on claim 28
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 3-4, 11, 14, 16-17 and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over McCoy et al. (US 2015/0116501 A1) in view of Damstra et al. (US 2018/0270427 A1) and further in view of Hu et al. (CN108111818A).
Regarding claim 1, McCoy discloses an image capturing system for controlling a plurality of cameras (paragraph 0012: “a network that is capable of communicatively coupling a plurality of cameras, a plurality of sensors, and a controlling device”), the system comprising: a primary camera (paragraph 0056: “the processor 202 may select the first camera 104a such that the first camera 104a may capture a front image of the first object 102a”); first and second secondary cameras configured to capture images of a scene (paragraph 0075: “the disclosure can be implemented for any number of cameras that may track an object. For example, the first object 102a may be tracked simultaneously by the first camera 104a and the second camera 104b selected by the processor 202”; it is noted that the first camera 104a and the second camera 104b are mapped to the first and second secondary cameras only as an example; because McCoy discloses that any number of cameras may track the object, the first and second secondary cameras may be any two cameras, such as cameras 104b and 104c in FIG. 3A); and wherein the primary camera, in response to being allocated the target in the scene (paragraph 0033: “Based on the metadata associated with the first object 102a to be tracked…controlling device 108 may select the first camera 104a to track the first object 102a”), is configured to lock on and track the target (paragraph 0033: “The controlling device 108 may focus the selected first camera 104a such that the first object 102a lies within the field of view of the selected first camera 104a. When the current position of the first object 102a changes, the selected first camera 104a may track the first object 102a”); wherein, responsive to the target being in a field of view of the first secondary camera (paragraph 0056: “the processor 202 may select the first camera 104a such that the first object 102a lies in the field of view of the first camera 104a”), a processor is configured to
activate and control the first secondary camera to track (paragraph 0059: “the processor 202 may track the first object 102a by using the selected first camera 104a”) and capture images of the target (paragraph 0056: “the first camera 104a to capture image of the first object 102a”); and wherein, responsive to the target moving (paragraph 0059: “the processor 202 may be operable to switch between multiple cameras based on the change in location of the first object 102a”) into a field of view of the second secondary camera (paragraph 0097: “the controlling device 108 may select the second camera 104b such that the first player 304a may lie in the second field of view 310b of the second camera 104b”), the processor is further configured to activate and control the second secondary camera to track and capture images of the target (paragraph 0059: “In such a case, the processor 202 may select the second camera 104b. The processor 202 may track the first object 102a using the second camera 104b”) and calibrate a parameter to maintain visual consistency of the images of the target captured by the second secondary camera with the images of the target captured by the first secondary camera (paragraph 0061: “the processor 202 may coordinate the adjustment of one or more parameters and/or settings of the multiple cameras…such that each of the first camera 104a, the second camera 104b, and the third camera 104c may capture images and/or videos of a particular area in a room”). However, McCoy fails to explicitly disclose calibrating a parameter received from the second secondary camera; and responsive to the primary camera locking on the target, the processor is further configured to activate and control the first secondary camera to track and capture images of the target.
In the related art of multi-camera capturing of media content, Damstra discloses calibrating a parameter received from the second secondary camera (Damstra paragraphs 0044-0045, 0061: “the plurality of cameras 20A, 20B and 20C can also be configured to transmit its respective metadata to a central controller (e.g., a content capture controller) or computing unit” where “The camera metadata can include, for example, camera lens information (e.g., focal length, maximum aperture, and the like),…pan/tilt/roll data, and other camera settings” and “use this information to match…camera parameters”). McCoy teaches the controlling device may receive metadata associated with an object (McCoy paragraph 0027) and the metadata associated with an object to be tracked may include any information capable of identifying an object to be tracked (McCoy paragraph 0048). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified McCoy to incorporate the teachings of Damstra to include camera parameters in the received metadata to improve video production workflow (Damstra paragraph 0007). However, McCoy, modified by Damstra, still fails to explicitly disclose responsive to the primary camera locking on the target, the processor is further configured to activate and control the first secondary camera to track and capture images of the target.
In the related art of multi-camera monitoring of a target, Hu discloses responsive to the primary camera locking on the target (Hu paragraphs 0015-0017: the main camera detects and selects the target and obtains the target location), activating and controlling the first secondary camera to track and capture images of the target (Hu paragraphs 0017, 0070: “Based on the candidate target location and the position mapping relationship between the main camera and the slave camera, the slave camera calculates the lens azimuth angle and zoom magnification, adjusts the slave camera to align with the candidate target area, and obtains a high-quality image of the candidate target”; “the slave camera is selected for tracking”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Hu to actively acquire high-quality images of targets in flexible scenarios (Hu paragraphs 0005-0007).
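For illustration only, the hand-off behavior at issue in claim 1 can be sketched in a few lines of Python. The Camera type, the rectangular field-of-view test, and the parameter dictionary below are hypothetical conveniences, not code from McCoy, Damstra, or Hu:

```python
# Minimal sketch, assuming a 2D scene coordinate system and rectangular
# fields of view; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Camera:
    name: str
    fov: tuple                                  # (x0, y0, x1, y1) scene region
    params: dict = field(default_factory=dict)  # e.g., exposure, white balance
    active: bool = False

    def sees(self, pos):
        x, y = pos
        x0, y0, x1, y1 = self.fov
        return x0 <= x <= x1 and y0 <= y <= y1

def hand_off(target_pos, secondaries, current=None):
    """Activate whichever secondary camera now sees the target; on a change
    of active camera, carry the previous camera's parameters across so the
    captured images of the target remain visually consistent."""
    for cam in secondaries:
        if cam.sees(target_pos):
            if current is not None and cam is not current:
                cam.params.update(current.params)  # calibrate received parameters
                current.active = False             # cf. dependent claim 21
            cam.active = True
            return cam
    return current
```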
Regarding claim 3, McCoy, modified by Damstra and Hu, discloses the system of claim 1, further comprising a second primary camera configured to capture images of the scene (McCoy paragraph 0036: “the controlling device 108 may select the first camera 104a and the second camera 104b to track the first object 102a and the second object 102b respectively”), wherein the system is configured to capture images of multiple targets simultaneously (McCoy paragraph 0034: “the multi-camera system 100 may be operable to simultaneously track two or more objects, such as the first object 102a and the second object 102b”), and wherein each target is allocated a controller (Damstra paragraph 0044: “each camera 20A, 20B and 20C includes a computer processing unit (CPU), memory and other common hardware and software components configured to store and/or update information relating to the camera settings, control operations, physical parameters, and the like”) configured to provide parameters of the multiple cameras to the processor (Damstra paragraph 0044: “the information stored in each camera and communicated to other cameras and/or a central control system is referred to as camera metadata”).
Regarding claim 4, McCoy, modified by Damstra and Hu, discloses the system of claim 3, wherein conflict between two controllers over activation and control of secondary cameras (McCoy paragraph 0057: “two or more cameras may satisfy the pre-determined criteria”) is resolved by priority rules (McCoy paragraph 0057: “the processor 202 may be operable to select a camera from the two or more cameras based on a pre-defined priority order associated with the two or more cameras”).
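As a hedged illustration of such a pre-defined priority order (the camera names and request shapes below are assumptions, not McCoy's):

```python
# Conflict resolution by a fixed camera ranking; all names are hypothetical.
PRIORITY_ORDER = ["cam_104a", "cam_104b", "cam_104c"]

def resolve(requests):
    """requests: iterable of (controller_id, camera_name) pairs competing
    for the same activation slot; the highest-ranked camera's request wins."""
    return min(requests, key=lambda req: PRIORITY_ORDER.index(req[1]))

# e.g., resolve([("ctrl_A", "cam_104b"), ("ctrl_B", "cam_104a")])
# returns ("ctrl_B", "cam_104a")
```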
Regarding claim 11, McCoy, modified by Damstra and Hu, discloses the system of claim 1, wherein the processor is located within the primary camera (McCoy paragraph 0029: “the controlling device 108 may be an integrated part of a camera, such as the first camera 104a”).
Regarding claim 14, it is the corresponding method executed by the system claimed in claim 1. Therefore, McCoy, modified by Damstra and Hu, discloses the limitations of claim 14 as it does the limitations of claim 1.
Regarding claim 16, McCoy, modified by Damstra and Hu, discloses the method of claim 14, further comprising: allocating a second target in the scene to a second primary camera (McCoy paragraph 0036: “the controlling device 108 may select…the second camera 104b to track…the second object 102b”); capturing images of the target and the second target simultaneously (see the rejection of claim 3 above); and receiving parameters of each primary and secondary camera (see the rejection of claim 3 above).
Regarding claim 17, McCoy, modified by Damstra and Hu, discloses the method of claim 16, further comprising: providing priority rules configured to resolve conflicts over activation and control of the secondary cameras (see the rejection of claim 4 above).
Regarding claim 20, it recites the corresponding non-transitory computer-readable medium storing computer-executable instructions that, when executed, perform the operations of the system claimed in claim 1. Therefore, McCoy, modified by Damstra and Hu, discloses the limitations of claim 20 as it does the limitations of claim 1.
Regarding claim 21, McCoy, modified by Damstra and Hu, discloses the system of claim 1, wherein the processor is further configured to both activate and control the second secondary camera to track and capture images of the target and to deactivate the first secondary camera (McCoy paragraph 0059: “the processor 202 may be operable to switch between multiple cameras based on the change in location of the first object 102a”; switching from a first camera to a second camera implies deactivating the first camera and activating the second camera) in response to the target moving out of the field of view of the first secondary camera (McCoy paragraph 0059: “when the location of the first object 102a changes, the first object 102a may move out of the field of view of the selected first camera 104a”) and into the field of view of the second secondary camera (McCoy paragraph 0097: “the controlling device 108 may select the second camera 104b such that the first player 304a may lie in the second field of view 310b of the second camera 104b”).
Regarding claim 22, it is the corresponding method executed by the system claimed in claim 21. Therefore, McCoy, modified by Damstra and Hu, discloses the limitations of claim 22 as it does the limitations of claim 21.
Regarding claim 23, it recites the corresponding non-transitory computer-readable medium storing computer-executable instructions that, when executed, perform the operations of the system claimed in claim 21. Therefore, McCoy, modified by Damstra and Hu, discloses the limitations of claim 23 as it does the limitations of claim 21.
Claim(s) 5 is rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Shao (CN114359351A).
Regarding claim 5, McCoy, modified by Damstra and Hu, discloses the system of claim 4. However, McCoy fails to explicitly disclose the priority rules are based on relative distance between the targets and the secondary cameras, or profile data of the targets, or combinations thereof. In the related art of target tracking, Shao discloses the priority rules are based on relative distance between the targets and the secondary cameras, or profile data of the targets (Shao paragraph 0083: “the priority of the first target and the priority of the second target…can be determined by the controller according to the attributes of the first target and the second target. For example, the controller determines that the first target is a moving target and the second target is a static target and is not dangerous, then the controller can determine that the priority of the first target is higher than the priority of the second target”), or combinations thereof. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Shao to ensure the target with higher importance is not lost (Shao paragraphs 0014, 0084).
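A minimal sketch of a Shao-style rule, assuming a dictionary-based target record and using distance only as a tiebreaker (both assumptions are mine, not Shao's):

```python
import math

def target_priority(target, camera_pos):
    """Higher score = tracked first. Moving targets outrank static ones,
    per Shao's example; proximity to the camera breaks ties."""
    score = 1.0 if target["moving"] else 0.0
    score -= 0.01 * math.dist(target["pos"], camera_pos)  # nearer is better
    return score
```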
Claim(s) 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Yee (US 2018/0077345 A1).
Regarding claim 6, McCoy, modified by Damstra and Hu, discloses the system of claim 1, further comprising providing camera views interpolated from views obtained by one or more of the primary and secondary cameras (Damstra paragraphs 0063-0064: “two overlapping scenes 26A and 26B are currently being captured…the new viewpoints can be created with interpolation and warping algorithms for virtual reality applications”). However, McCoy and Damstra fail to specifically disclose one or more virtual cameras. In the related art of camera control, Yee discloses one or more virtual cameras (Yee FIG. 5, paragraph 0109: “virtual camera views 510, 520, 530, 540 and 550”) configured to provide camera views interpolated from views obtained by one or more of the primary and secondary cameras (Yee FIG. 8, paragraph 0109: “The virtual camera views 520 to 550 are generated by interpolating between numbers of physical cameras”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy and Damstra to incorporate the teachings of Yee to frame close up shots of the players as if the virtual camera is at player level and on the field during play, when physical cameras cannot be on the field during play (Yee paragraph 0108).
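For a rough sense of the interpolation idea, a toy linear blend of camera poses (not Yee's actual algorithm, which additionally warps the captured images):

```python
import numpy as np

def interpolate_pose(pose_a, pose_b, t):
    """pose_*: (position, look_at) pairs of 3-vectors; 0 <= t <= 1 slides a
    virtual viewpoint from physical camera A toward physical camera B."""
    pos = (1.0 - t) * np.asarray(pose_a[0]) + t * np.asarray(pose_b[0])
    aim = (1.0 - t) * np.asarray(pose_a[1]) + t * np.asarray(pose_b[1])
    return pos, aim
```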
Regarding claim 19, it is the corresponding method executed by the system claimed in claim 6. Therefore, McCoy, modified by Damstra, Hu and Yee, discloses the limitations of claim 19 as it does the limitations of claim 6.
Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Chen (US 10404915 B1).
Regarding claim 7, McCoy, modified by Damstra and Hu, discloses the system of claim 1, wherein the primary camera comprises computer circuitry (McCoy paragraph 0018: “The cameras 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to capture and/or process an image and/or a video content”). However, McCoy fails to disclose the computer circuitry is programmed to combine the images to form a panoramic view of the target and to send the combined image to a server for further processing. In the related art of generating panoramic images, Chen discloses computer circuitry programmed to combine the images to form a panoramic view of the target (Chen col 6 lines 20-26: “panoramas stitched from a set of original images captured by cameras 101, 102, and 103”) and to send the combined image to a server for further processing (Chen col 15 lines 44-48: “send to computing devices 602 and 604 via a wireless communication method”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Chen to further process panoramic images or videos according to an end user's needs, such as 3D modeling, object tracking, and virtual reality programs (Chen col 8 lines 13-28).
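The stitch-and-upload pipeline can be approximated with OpenCV's stock stitcher; the server endpoint below is a placeholder, and nothing here is Chen's implementation:

```python
import cv2
import requests

def stitch_and_upload(frames, url):
    """frames: overlapping BGR images of the target from multiple cameras;
    url: placeholder server endpoint for further processing."""
    status, pano = cv2.Stitcher.create().stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed (status {status})")
    ok, buf = cv2.imencode(".jpg", pano)
    requests.post(url, data=buf.tobytes(),
                  headers={"Content-Type": "image/jpeg"})
    return pano
```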
Claim(s) 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Boulanger et al. (US 2003/0067536 A1).
Regarding claim 8, McCoy, modified by Damstra and Hu, discloses the system of claim 1, wherein the primary camera comprises computer circuitry (McCoy paragraph 0018: “The cameras 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to capture and/or process an image and/or a video content”). However, McCoy fails to disclose the computer circuitry is programmed to remove background from the images. In the related art of virtual videoconferencing, Boulanger discloses computer circuitry programmed to remove background from the images (Boulanger paragraph 0033: “The system then separates the image of each participant 102a, 102b from the background that appears in the respective video images”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Boulanger to provide a realistic immersive three-dimensional environment for videoconferencing participants (Boulanger paragraph 0010).
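Background separation of the kind Boulanger describes is commonly done with a learned background model; a minimal sketch using OpenCV's MOG2 subtractor (my choice of method, not the reference's):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def remove_background(frame):
    mask = subtractor.apply(frame)                   # per-pixel foreground mask
    return cv2.bitwise_and(frame, frame, mask=mask)  # keep the subject only
```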
Regarding claim 9, McCoy, modified by Damstra, Hu and Boulanger, discloses the system of claim 8, wherein the computer circuitry of the primary camera is further programmed to transfer the images into a virtual environment (Boulanger paragraph 0034: “Each transformed video pair is then transmitted to the other participant 102a, 102b and incorporated into the respective participant's view of the virtual meeting”) rendered by a receiving device (Boulanger paragraph 0049: “The virtual meeting scene is rendered as a stereo pair of video images overlaid on the 3D virtual model of the meeting room, from the perspective of the position of the local participant”).
Regarding claim 10, McCoy, modified by Damstra, Hu and Boulanger, discloses the system of claim 9, wherein the virtual environment enables hosting videoconferencing sessions (Boulanger paragraph 0024: “provide a single seamless, immersive videoconferencing environment for a plurality of participants…permits multiple participants to experience an illusion of immersion in a real meeting in a three-dimensional (3D) virtual space”), wherein the images rendered in the virtual environment represent participants of a videoconferencing session (Boulanger paragraph 0049: “the respective video images (remote participant's and virtual object(s)) are combined to create a composite stereo view for the local participant”).
Claim(s) 12 is rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Venshtain et al. (US 2019/0043266 A1).
Regarding claim 12, McCoy, modified by Damstra and Hu, discloses the system of claim 1, wherein the primary and secondary cameras (McCoy paragraph 0026: “sensors 106 may be an integrated part of the cameras 104”) are configured to capture depth information (McCoy paragraph 0026: “The sensors 106 may be operable to determine a location of the objects 102 relative to the cameras 104”). However, McCoy fails to disclose converting the depth information into 3D meshes. In the related art of three-dimensional mesh generation, Venshtain discloses converting the depth information into 3D meshes (Venshtain paragraph 0123: “generates 3D meshes of a subject from depth data gathered from multiple vantage points of the subject”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Venshtain to provide the viewer with a 360° 3D virtual experience of a 3D persona (Venshtain paragraph 0094).
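Depth-to-mesh conversion is a generic technique; one common recipe (assumed here, not Venshtain's specific pipeline) treats each depth pixel as a vertex and joins neighboring pixels into triangle pairs:

```python
import numpy as np

def depth_to_mesh(depth):
    """depth: (H, W) array of z values; returns (vertices, faces)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1),   # upper-left triangles
                            np.stack([b, d, c], 1)])  # lower-right triangles
    return vertices, faces
```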
Claim(s) 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra, Hu and Venshtain in view of Overbeck et al. (WO 2024/107872 A1).
Regarding claim 13, McCoy, modified by Damstra, Hu and Venshtain, discloses the system of claim 12, wherein the secondary cameras are configured to send the 3D meshes (Venshtain paragraph 0123: “PSS 202 obtains frame-synchronized 3D meshes of the subject by receiving such 3D meshes from another entity such as the DCs 306D [depth-capture cameras]”) to the primary camera (McCoy paragraphs 0018, 0029: “The cameras 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to capture and/or process an image and/or a video content”; in particular, “the controlling device 108 may be an integrated part of a camera, such as the first camera 104a”), wherein the primary camera is configured to process (Venshtain paragraph 0256: “mesh-tuning processes are carried out”), and compress the 3D meshes before sending the compressed 3D meshes to a receiving device (Venshtain paragraph 0293: “compress the visible-vertices lists [aka submesh] prior to transmitting them to the rendering device”). However, McCoy and Venshtain fail to explicitly disclose combining the 3D meshes. In the related art of three-dimensional mesh generation, Overbeck discloses combining the 3D meshes (Overbeck paragraph 0046: “a synthesized mesh is generated based on the plurality of meshes”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified McCoy and Venshtain to incorporate the teachings of Overbeck to generate layered meshes that achieve similar quality to multi-plane images but with far fewer layers, enabling an efficient way to perform learned upsampling and rendering (Overbeck paragraph 0046).
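A hedged sketch of the compress-before-sending step (a generic quantize-and-deflate scheme, not Venshtain's visible-vertices approach):

```python
import struct
import zlib
import numpy as np

def pack_mesh(vertices, faces):
    """Prefix the counts, quantize to float32/int32, and deflate so the
    combined mesh can be sent compactly to a receiving device."""
    v = np.asarray(vertices, np.float32)
    f = np.asarray(faces, np.int32)
    header = struct.pack("<II", len(v), len(f))
    return zlib.compress(header + v.tobytes() + f.tobytes())
```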
Regarding claim 18, it is the corresponding method executed by the system claimed in claim 13. Therefore, McCoy, modified by Damstra, Hu, Venshtain and Overbeck, discloses the limitations of claim 18 as it does the limitations of claim 13.
Claim(s) 24-29 are rejected under 35 U.S.C. 103 as being unpatentable over McCoy, Damstra and Hu in view of Kanade et al. (US 2002/0118286 A1).
Regarding claim 24, McCoy, modified by Damstra and Hu, discloses the system of claim 1. However, McCoy fails to disclose the processor is further configured to calibrate a plurality of parameters of the second secondary camera to maintain visual consistency by maintaining a size and focus of the images of the target captured by the first and second secondary cameras. In the related art of controlling multiple cameras, Kanade discloses the processor (Kanade paragraph 0031: “the master control unit 24”) is further configured to calibrate a plurality of parameters of the second secondary camera (Kanade paragraph 0031: “the slave control module 43 may compute the PTZF parameters for each of the slave camera systems 16 based on the determined target position and size”) to maintain visual consistency by maintaining a size (Kanade paragraph 0008: “determining, based on parameters of the first variable pointing camera system, parameters for at least a second variable pointing camera system such that, at a point in time, the first and second variable pointing camera systems are aimed at the target and a size of the target in an image from the first and second variable pointing camera systems is substantially the same”) and focus of the images of the target captured by the first and second secondary cameras (Kanade paragraph 0027: “the slave camera systems 16 may track the same target as the master camera system 14 and with the same focus”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified McCoy to incorporate the teachings of Kanade to smoothly and accurately track a moving target within a dynamic scene (Kanade paragraphs 0007, 0027).
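Kanade's size-matching condition follows from the pinhole model, under which the target's image size scales as focal length over distance; a one-line sketch (my formulation of that geometry, not Kanade's code):

```python
def matched_focal_length(f_first, z_first, z_second):
    """Image size ~ f * S / Z for a target of physical size S, so keeping
    f / Z equal across cameras keeps the target's image size the same."""
    return f_first * (z_second / z_first)

# e.g., matched_focal_length(50.0, 10.0, 20.0) -> 100.0 (twice the distance
# requires twice the focal length for the same target size)
```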
Regarding claim 25, McCoy, modified by Damstra, Hu and Kanade, discloses the system of claim 24, wherein the processor is further configured to calibrate a plurality of parameters of both the first and second secondary cameras (Kanade paragraph 0031: “the slave control module 43 may compute the PTZF parameters for each of the slave camera systems 16 based on the determined target position and size”) to maintain visual consistency by maintaining the size (Kanade paragraph 0008: “determining, based on parameters of the first variable pointing camera system, parameters for at least a second variable pointing camera system such that, at a point in time, the first and second variable pointing camera systems are aimed at the target and a size of the target in an image from the first and second variable pointing camera systems is substantially the same”) and focus of the images of the target captured by the first and second secondary cameras (Kanade paragraph 0027: “the slave camera systems 16 may track the same target as the master camera system 14 and with the same focus”).
Regarding claim 26, it is the corresponding method executed by the system claimed in claim 24. Therefore, McCoy, modified by Damstra, Hu and Kanade, discloses the limitations of claim 26 as it does the limitations of claim 24.
Regarding claim 27, it is the corresponding method executed by the system claimed in claim 25. Therefore, McCoy, modified by Damstra, Hu and Kanade, discloses the limitations of claim 27 as it does the limitations of claim 25.
Regarding claim 28, it recites the corresponding non-transitory computer-readable medium storing computer-executable instructions that, when executed, perform the operations of the system claimed in claim 24. Therefore, McCoy, modified by Damstra, Hu and Kanade, discloses the limitations of claim 28 as it does the limitations of claim 24.
Regarding claim 29, it recites the corresponding non-transitory computer-readable medium storing computer-executable instructions that, when executed, perform the operations of the system claimed in claim 25. Therefore, McCoy, modified by Damstra, Hu and Kanade, discloses the limitations of claim 29 as it does the limitations of claim 25.
Response to Arguments
Applicant's arguments with respect to independent claims 1, 14 and 20 have been fully considered but they are not persuasive.
Regarding the argument that “adjusting parameters [in McCoy] is for directing multiple cameras to cover the same spatial region and not, as the claims require, for maintaining visual consistency of images of a target captured by different cameras as the target moves from a field of view of one to the field of view of the other…there is no visual consistency contemplated in McCoy”, MPEP § 2111.01 (Plain Meaning), sections I, II, and III, explains that the words of a claim must be given their “plain meaning” unless such meaning is inconsistent with the specification, that it is improper to import claim limitations from the specification, and that “plain meaning” refers to the ordinary and customary meaning given to the term by those of ordinary skill in the art. Under the broadest reasonable interpretation, imaging the same spatial region and/or the same target maintains visual consistency of the images captured by different cameras, because the visual contents of those images are consistent.
Furthermore, regarding the argument that “Paragraph [0061] of McCoy describes simultaneous coordination of multiple cameras viewing the same area – not a handoff scenario where visual consistency must be preserved across cameras as a target transitions between them”, it is noted that the features upon which applicant relies (i.e., "as the target moves from a field of view of one to the field of view of the other") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The independent claims, as amended, require the target moving into a field of view of the second secondary camera, but not necessarily the target moving out of the field of view of the first secondary camera. Thus, it is possible for the target to be simultaneously in both the field of view of the first secondary camera and the field of view of the second secondary camera when the parameters are calibrated to maintain visual consistency.
Regarding the arguments that “Damstra matches camera parameters for the purpose of creating alternative viewpoints of the same scene captured simultaneously by multiple cameras. Damstra does not calibrate a camera parameter to maintain visual consistency of images of a moving target as it transitions from one camera's field of view to another camera's field of view” and “The adjustment of the slave camera in Hu is for acquiring high-quality images of a target – not the calibration of a parameter to maintain visual consistency of images of a moving target as it transitions from one camera's field of view to another camera's field of view”, they are not persuasive for the same reasons as delineated above regarding the arguments against McCoy.
Applicant’s argument that “There is no teaching of calibrating a parameter(s) to ensure the images themselves are visually consistent (e.g., same size and focus per dependent claims)” with respect to newly added claim(s) 24-29 has been considered but is moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.Z./ Examiner, Art Unit 2677
/ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677